# Learn Data Structures and Algorithms in JavaScript | Part 10

Edison Pebojot (👨‍💻) ・ 24 min read

# Prerequisite (✋😐)

If you're reading this article right now, please consider reading our Part 01: Big-O Notation, Part 02: JavaScript Unique Parts, and Part 03: JavaScript Numbers through Part 09: Sets first

01 Big-O Notation By the end of this chapter, you will understand how to analyze an implementation of an algorithm with respect to both time (execution time) and space (memory consumed).
02 JavaScript Unique Parts Big-O is important for analyzing and comparing the efficiencies of algorithms. The analysis of Big-O starts by looking at the code and applying rules to simplify the notation, because labeling an algorithm as simply linear or quadratic is not enough on its own.
03 JavaScript Numbers This part 3 will focus on JavaScript number operations, number representation, Number objects, common number algorithms, and random number generation.
04 JavaScript Strings This part 4 will focus on strings, JavaScript String object, and the String object’s built-in functions. You will learn how to access, compare, decompose, and search strings for commonly used real-life purposes. In addition, the chapter will explore string encoding, decoding, encryption, and decryption.
05 JavaScript Arrays As a JavaScript developer, you will use the array often; it is the most commonly used data structure. Arrays in JavaScript come with a lot of built-in methods. By the end of this part, you will understand arrays and know how to choose the right method.
06 JavaScript Object This part will focus on what JavaScript objects are, how they are declared, and how their properties can be changed. In addition, this part will cover how JavaScript classes are implemented using prototypal inheritance. Also this part will be short.
07 JavaScript Memory Management A variable takes up some memory. In C, the programmer allocates and deallocates memory manually. In contrast, modern JavaScript engines have garbage collectors that delete unused variables. However, there are unexpected pitfalls that developers can fall into (❗) This part will show these pitfalls and present techniques to help the garbage collector minimize memory problems.
08 Recursion This part 8 introduces the concept of recursion and recursive algorithms(Remember they are different, we will discuss them later(😉)). We will also discuss the definition of recursion and fundamental rules for recursive algorithms.
09 Sets This part focuses on the concepts of sets from both a mathematical definition and on the implementation. Also, Common set operations, as well as their implementations, are covered in great detail (💡).
10 Searching and Sorting This part 10 focuses on searching and sorting for arrays. By the end of this part 10, you will understand how to use sorting and searching algorithms for arrays. Also, this article is a bit complicated for beginners, so as much as possible the visual aids are your friends (👀). (😃)
11 Hash Tables A hash table is a fixed-sized data structure in which the size is defined at the start. This part 11 explains how hash tables work, and the method of generating a unique key. By the end of this part 11, you will understand various hashing techniques and know how to implement a hash table. (😃)
12 Stacks and Queues This part 12 covers stacks and queues (pronounced as kyooz (🔈), not kwewe, okay? hehehe (😅)); both are data structures used in the implementation of complex data structures. You'll learn what stacks and queues are, how they're used, when they're used, and how to implement them (😃) Let's go! (🔥🔥🔥)
13 Linked Lists A linked list is a data structure in which each node (or element) points to another node. Unlike arrays, which have a fixed size, a linked list is a dynamic data structure. By the end of this part 13, you will understand how to implement and work with linked lists. And oh! (😮) There are two types of linked lists: singly (➡️) and doubly (↔️). Let’s examine the singly linked list first.(😃) Let's go! (🔥🔥🔥)
14 Caching Caching is the process of storing data in temporary memory so that it can be easily retrieved for later use if it is required again. As an example, a database keeps data cached to avoid re-reading the hard drive, and a web browser caches web images to avoid re-downloading. In this part 14, two caching techniques will be discussed: LFU and LRU caching.
15 Trees A general tree data structure is composed of nodes with children nodes. The top node is called the root node. This part 15 will explore different types of trees such as binary trees, binary search trees, and self-balancing binary search trees. First, this part 15 will cover what trees are and how they are structured. Then, it will cover methods of traversing(crossing or taking a zigzag path) the tree data structure in detail. Finally, you will learn about binary search trees and self-balancing binary search trees to understand how to store searchable data. (😃)
16 Heaps A heap is an important data structure that returns the highest or lowest element in O(1) time. This part 16 will focus on explaining how heaps are implemented as well as how to work with them. One example is heap sort, which is a sorting algorithm based on heaps.
17 Graphs In this Part 17, you will learn graph basics, including fundamental terminology and graph types. The Part 17 will also cover working with these different graph types and methods of representing graphs in data structures. Finally, algorithms for traversing, searching, and sorting graphs are explored to solve problems such as finding the shortest path between two graph nodes. (👍)
18 Advanced Strings Part 18 will cover more complex string algorithms than those covered in the previous parts. Now that you have learned about certain other data structures, they should be easier to comprehend. Specifically, Part 18 will focus on string searching algorithms. (😉)
19 Dynamic Programming Dynamic programming involves breaking down problems into their subproblems. By solving the subproblems and saving those results in memory to access them whenever a repeated subproblem needs to be solved, the algorithmic complexity decreases significantly (⬇️). To explain dynamic programming, let's re-examine the Fibonacci sequence that was discussed in Part 8. Then Part 19 will cover the rules of dynamic programming and walk you through some examples to make the concepts more concrete. (😉)
20 Bit Manipulation Bit manipulation is an advanced topic that JavaScript developers typically do not need to know. However, you should learn a bit about bit manipulation if you want to implement high-performance server-side code. Understanding bit manipulation requires some knowledge of digital logic. Any introductory course in discrete math or circuits would be helpful to understand these concepts.

# Part 10: Searching and Sorting Algorithms (😱 🔥 🔍🌀)

Searching through data and sorting data are fundamental algorithms. Searching (🔍) refers to iterating over the data structure's elements to retrieve some data. Sorting (📊) refers to putting the data structure's elements in order. Searching and sorting algorithms are different but closely related tasks.

This part 10 focuses on searching and sorting for arrays. By the end of this part 10, you will understand how to use sorting and searching algorithms for arrays. Also, this article is a bit complicated for beginners, so as much as possible the visual aids are your friends (👀). (😃)

## Searching (🔎🔍)

As mentioned, searching is the task of looking for a specific element inside a data structure. When searching in an array, there are two main techniques: linear search and binary search.

Linear searches are flexible because they can be used with both sorted and unsorted data. Binary searches can be used only with sorted data. But remember: a linear search has a higher time complexity than a binary search. Okay? (😄👍)

### Linear Search (🔎➡️🔢)

A linear search works by going through each element of the array one index after another. The following code example is an implementation of a linear search to find out whether 6 and 10 exist within the array:

Example:

    // iterate through the array and find n
    function linearSearch(array, n) {
        for (var i = 0; i < array.length; i++) {
            if (array[i] == n) {
                return true;
            }
        }
        return false;
    }
    linearSearch([1, 2, 3, 4, 5, 6, 7, 8, 9], 6); // true
    linearSearch([1, 2, 3, 4, 5, 6, 7, 8, 9], 10); // false



Time Complexity: $O\lparen n\rparen$

Here's another example to clarify linear search:

As shown in Figure 10-1, when 6 is searched for, it takes six iterations. When 10 is searched for, it must iterate over all $n$ elements before returning false; therefore, the time complexity is $O\lparen n\rparen$ .

As another example, with an array of [1, 2, 3, 4, 5] and a search term of 3, it takes three iterations. The reason this algorithm has a Big-O of $O\lparen n\rparen$ is that, in the worst case, the entire array needs to be iterated. For example, if the search term is 5, it takes five iterations. If the search term is 6, the search goes through the entire array before returning false.

As noted previously, a linear search algorithm like this works whether or not the array is sorted, because every element of the array is checked. So, use a linear search when the array is not sorted. If the array is sorted, you can do a binary search instead, which we will discuss in the next section. Let's go! (🔥🔥🔥)
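As a small variation (a sketch of my own, not part of the original code), a linear search can also return the index where the element was found instead of a boolean, similar to JavaScript's built-in indexOf:

```javascript
// A sketch: linear search that returns the index of n, or -1 if not found,
// similar to the built-in Array.prototype.indexOf
function linearSearchIndex(array, n) {
    for (var i = 0; i < array.length; i++) {
        if (array[i] == n) {
            return i; // found: return the position
        }
    }
    return -1; // not found
}
linearSearchIndex([1, 2, 3, 4, 5, 6, 7, 8, 9], 6); // 5 (the index of 6)
linearSearchIndex([1, 2, 3, 4, 5, 6, 7, 8, 9], 10); // -1
```

Returning the index is handy when you need to know where the match is, not just whether it exists.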

### Binary Search (🔎🔢🔍)

Binary search is a searching algorithm that works on sorted data. A binary search checks the middle value to see whether the desired value is greater (▶️) or smaller (◀️) than it. If the desired value is smaller, the algorithm searches the smaller half; if it is bigger, it searches the bigger half. Let's see an example:

Figure 10-2 illustrates the process of a binary search. First, the search range is 1 to 9. Since 5 is bigger than 3, the search range narrows to 1 to 4. Finally, 3 is found. Let's look at another example below:

Figure 10-3 illustrates searching for an item in the right of the array. The following code implements the binary search algorithm:

Example:

    function binarySearch(array, n) {
        var lowIndex = 0;
        var highIndex = array.length - 1;

        while (lowIndex <= highIndex) {
            // the middle of the current search range
            var middleIndex = Math.floor((lowIndex + highIndex) / 2);
            if (n == array[middleIndex]) {
                return array[middleIndex];
            } else if (n > array[middleIndex]) {
                // search the upper half
                lowIndex = middleIndex + 1;
            } else {
                // search the lower half
                highIndex = middleIndex - 1;
            }
        }
        return -1; // the range is empty: n is not in the array
    }
    binarySearch([1, 2, 3, 4], 4); // 4
    binarySearch([1, 2, 3, 4], 5); // -1



The binary search algorithm is fast but can be used only if the array is sorted. If the search element is bigger than the middle element, the search continues in the upper half (from middle index plus one to the end). If the search element is smaller than the middle element, the search continues in the lower half (from the start to middle index minus one). Each comparison cuts the search range in half, which is why the time complexity is $O\lparen log_{2}\lparen n\rparen\rparen$ . Get it? Don't worry, you are more likely to get it in the challenge section. (😊)
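To make the halving idea concrete, here is a sketch of the same algorithm written recursively with explicit low and high bounds (the name binarySearchRecursive is mine):

```javascript
// A sketch: recursive binary search with explicit low/high bounds
function binarySearchRecursive(array, n, low, high) {
    if (low > high) {
        return -1; // base case: the range is empty, n is not in the array
    }
    var mid = Math.floor((low + high) / 2);
    if (array[mid] == n) {
        return array[mid];
    } else if (n > array[mid]) {
        // search the upper half
        return binarySearchRecursive(array, n, mid + 1, high);
    } else {
        // search the lower half
        return binarySearchRecursive(array, n, low, mid - 1);
    }
}
binarySearchRecursive([1, 2, 3, 4], 4, 0, 3); // 4
binarySearchRecursive([1, 2, 3, 4], 5, 0, 3); // -1
```

Each recursive call shrinks the range by half, so the depth of the recursion is $O\lparen log_{2}\lparen n\rparen\rparen$ .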

## Sorting (📊📊)

Sorting is one of the most important topics in computer science; it is easier to locate items in a sorted array than in an unsorted array. In this section, different sorting techniques will be explored. We will start with the naive (primitive) sorting algorithms and then explore efficient sorting algorithms. I recommend watching the short video clips (🎥) to solidify your understanding as you go (😃).

### Bubble Sort

Bubble sort simply swaps elements if one is bigger than the other. Here's a short video to visualize what bubble sort is (🎬 [Length: 0:58]):

Now look at the image below to solidify your understanding. The swap function simply switches two array element values and will be used as a helper function:

Example:

    function swap(array, index1, index2) {
        var temp = array[index1];
        array[index1] = array[index2];
        array[index2] = temp;
    }


Now, the following bubbleSort code block illustrates the bubble sort algorithm:

Note (📝): Don't forget to include the swap helper function from above when running the bubbleSort code block:

Example:

    function bubbleSort(array) {
        for (var i = 0, arrayLength = array.length; i < arrayLength; i++) {
            for (var j = 0; j <= i; j++) {
                if (array[i] < array[j]) {
                    swap(array, i, j);
                }
            }
        }
        return array;
    }
    bubbleSort([6, 1, 2, 3, 4, 5]); // [1, 2, 3, 4, 5, 6]



Time Complexity: $O\lparen n^{2}\rparen$
Space Complexity: $O\lparen 1\rparen$

Bubble sort is the worst-performing sort covered here because it compares every possible pair. And because bubble sort uses nested loops, it has a time complexity of $O\lparen n^{2}\rparen$ .
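For comparison, the classic textbook bubble sort swaps adjacent pairs and can stop early once a pass makes no swaps, which helps on nearly sorted input. A sketch (the name bubbleSortOptimized is mine):

```javascript
// A sketch: classic bubble sort comparing adjacent pairs, with an early
// exit when a full pass makes no swaps (helps on nearly sorted arrays)
function bubbleSortOptimized(array) {
    for (var i = array.length - 1; i > 0; i--) {
        var swapped = false;
        for (var j = 0; j < i; j++) {
            if (array[j] > array[j + 1]) {
                // swap adjacent elements that are out of order
                var temp = array[j];
                array[j] = array[j + 1];
                array[j + 1] = temp;
                swapped = true;
            }
        }
        if (!swapped) {
            break; // no swaps in this pass: the array is already sorted
        }
    }
    return array;
}
bubbleSortOptimized([6, 1, 2, 3, 4, 5]); // [1, 2, 3, 4, 5, 6]
```

The worst case is still $O\lparen n^{2}\rparen$ , but the early exit makes the best case (an already sorted array) $O\lparen n\rparen$ .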

### Selection Sort

Selection sort works by scanning for the smallest remaining element and swapping it into the current position of the array. This algorithm is marginally (slightly) better than bubble sort. Here's a short video to visualize what selection sort is (🎬 [Length: 1:36]):

Now look at the image below to solidify your understanding. Figure 10-6 shows this selection process. The following code implements the selection sort. In the code, there is one for loop to iterate through the array and one nested for loop to scan for the minimum element:

Example:

    function selectionSort(items) {
        var len = items.length,
            min;

        for (var i = 0; i < len; i++) {
            // set minimum to this position
            min = i;
            // check the rest of the array to see if anything is smaller
            for (var j = i + 1; j < len; j++) {
                if (items[j] < items[min]) {
                    min = j;
                }
            }
            // if the minimum isn't in the position, swap it
            if (i != min) {
                swap(items, i, min);
            }
        }

        return items;
    }
    selectionSort([6, 1, 23, 4, 2, 3]); // [1, 2, 3, 4, 6, 23]



Time Complexity: $O\lparen n^{2}\rparen$
Space Complexity: $O\lparen 1\rparen$

The time complexity for selection sort is still $O\lparen n^{2}\rparen$ because of the nested for loop.

### Insertion Sort

Insertion sort works similarly to selection sort in that it builds up a sorted sublist on the left side of the array, but it does so by shifting sorted items over and inserting each unsorted item into place. Here's a short video to visualize what insertion sort is (🎬 [Length: 1:42]):

Now look at the image below to solidify your understanding. Figure 10-7 shows this process in detail. The following code implements the insertion sort algorithm. The outer for loop iterates over the array, and the inner for loop shifts items in the sorted sublist to make room for the current value:

Example:

    function insertionSort(items) {
        var len = items.length, // number of items in the array
            value, // the value currently being compared
            i, // index into unsorted section
            j; // index into sorted section

        for (i = 0; i < len; i++) {
            // store the current value because it may shift later
            value = items[i];

            // Whenever the value in the sorted section is greater than the value
            // in the unsorted section, shift all items in the sorted section
            // over by one. This creates space in which to insert the value.
            for (j = i - 1; j > -1 && items[j] > value; j--) {
                items[j + 1] = items[j];
            }
            items[j + 1] = value;
        }
        return items;
    }
    insertionSort([6, 1, 23, 4, 2, 3]); // [1, 2, 3, 4, 6, 23]



Time Complexity: $O\lparen n^{2}\rparen$
Space Complexity: $O\lparen 1\rparen$

Again, this sorting algorithm has a quadratic time complexity of $O\lparen n^{2}\rparen$ like bubble and selection sort because of the nested loops.

### Quicksort

Quicksort works by obtaining a pivot and partitioning the array around it until everything is sorted. The ideal pivot is the median of the array since it partitions the array evenly, but getting the median of an unsorted array requires extra work. Hence, a pivot is typically obtained by taking the median value of the first, middle, and last elements.
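The median-of-three idea can be sketched as a small helper (hypothetical; the quicksort implementation below simply uses the middle element as the pivot instead):

```javascript
// A sketch: pick the median of the first, middle, and last elements as the
// pivot, to avoid consistently bad pivots on already-sorted input
function medianOfThree(array, left, right) {
    var mid = Math.floor((left + right) / 2);
    var a = array[left], b = array[mid], c = array[right];
    // return whichever of a, b, c lies between the other two
    if ((a - b) * (c - a) >= 0) return a; // a is between b and c
    if ((b - a) * (c - b) >= 0) return b; // b is between a and c
    return c;
}
medianOfThree([9, 2, 5, 1, 7], 0, 4); // 7 (median of 9, 5, and 7)
medianOfThree([1, 2, 3], 0, 2); // 2 (median of 1, 2, and 3)
```

The sign tests check "betweenness": a value x is between the other two exactly when the differences on either side of x have the same sign (or one is zero).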

This sort is recursive and uses the divide-and-conquer methodology to break the quadratic complexity and bring the time complexity down to $O\lparen nlog_{2}\lparen n\rparen\rparen$ . However, with a pivot that partitions everything onto one side, the worst-case time complexity is $O\lparen n^{2}\rparen$ . Here's a short video to visualize what quicksort is (🎬 [Length: 3:05]):

Now look at the image below to solidify your understanding. Figure 10-8 shows the quicksort process's partitioning steps in great detail. The following code shows an implementation of the quicksort algorithm:

Example:

    function quickSort(items) {
        return quickSortHelper(items, 0, items.length - 1);
    }

    function quickSortHelper(items, left, right) {
        var index;
        if (items.length > 1) {
            index = partition(items, left, right);

            if (left < index - 1) {
                quickSortHelper(items, left, index - 1);
            }

            if (index < right) {
                quickSortHelper(items, index, right);
            }
        }
        return items;
    }

    function partition(array, left, right) {
        var pivot = array[Math.floor((right + left) / 2)];
        while (left <= right) {
            while (pivot > array[left]) {
                left++;
            }
            while (pivot < array[right]) {
                right--;
            }
            if (left <= right) {
                var temp = array[left];
                array[left] = array[right];
                array[right] = temp;
                left++;
                right--;
            }
        }
        return left;
    }

    quickSort([6, 1, 23, 4, 2, 3]); // [1, 2, 3, 4, 6, 23]



Time Complexity: $O\lparen nlog_{2}\lparen n\rparen\rparen$ on average, and $O\lparen n^{2}\rparen$ in the worst case
Space Complexity: $O\lparen log_{2}\lparen n\rparen\rparen$ on average, for the recursion stack

One downside of the quicksort algorithm is that it could potentially be $O\lparen n^{2}\rparen$ if a bad pivot is always picked. A bad pivot is one that does not partition the array evenly; the ideal pivot is the median element of the array. In addition, quicksort needs $O\lparen log_{2}\lparen n\rparen\rparen$ space on average for its recursion stack.

Note (📝): Use a quicksort algorithm when average performance matters most. This has to do with the fact that quicksort's in-place access pattern works well with CPU caches! (💡)

### Quickselect

Quickselect is a selection algorithm for finding the kth smallest element in an unordered list. Quickselect uses the same approach as quicksort: a pivot is chosen, and the array is partitioned. But instead of recursing into both sides like quicksort, it recurses only into the side that contains the kth element.

This reduces the average complexity from $O\lparen nlog_{2}\lparen n\rparen\rparen$ to $O\lparen n\rparen$ . Here's a short video to visualize what quickselect is (🎬 [Length: 4:31]):

Quickselect is implemented in the following code:

Example:

    var array = [1, 3, 3, -2, 3, 14, 7, 8, 1, 2, 2];
    // sorted form: [-2, 1, 1, 2, 2, 3, 3, 3, 7, 8, 14]
    function quickSelectInPlace(A, l, h, k) {
        var p = partition(A, l, h);
        if (p == k - 1) {
            return A[p];
        } else if (p > k - 1) {
            return quickSelectInPlace(A, l, p - 1, k);
        } else {
            return quickSelectInPlace(A, p + 1, h, k);
        }
    }

    function partition(array, left, right) {
        var pivot = array[Math.floor((right + left) / 2)];
        while (left <= right) {
            while (pivot > array[left]) {
                left++;
            }
            while (pivot < array[right]) {
                right--;
            }
            if (left <= right) {
                var temp = array[left];
                array[left] = array[right];
                array[right] = temp;
                left++;
                right--;
            }
        }
        return left;
    }

    quickSelectInPlace(array, 0, array.length - 1, 5); // 2
    // 2 - because it's the fifth smallest element
    quickSelectInPlace(array, 0, array.length - 1, 10); // 7
    // 7 - because it's the tenth smallest element



Time Complexity: $O\lparen n\rparen$

### Merge Sort

Mergesort works by dividing the array into subarrays until each subarray has one element. Then, the subarrays are merged back together in sorted order. Here's a short video to visualize what merge sort is (🎬 [Length: 1:39]):

Now look at the image below to solidify your understanding. The merge function adds all the elements in sorted order. It works by taking two arrays (left and right) and merging them into one array, comparing elements as they are merged so the result stays in order.

Note (📝): I won't explain this conceptually in depth, as it can be very hard for some to understand; instead, it is best understood in practice, so look at the code and study it:

Example:

    function merge(leftA, rightA) {
        var results = [], leftIndex = 0, rightIndex = 0;

        while (leftIndex < leftA.length && rightIndex < rightA.length) {
            // take from the left on ties to keep the sort stable
            if (leftA[leftIndex] <= rightA[rightIndex]) {
                results.push(leftA[leftIndex++]);
            } else {
                results.push(rightA[rightIndex++]);
            }
        }
        var leftRemains = leftA.slice(leftIndex),
            rightRemains = rightA.slice(rightIndex);

        // add remaining to resultant array
        return results.concat(leftRemains).concat(rightRemains);
    }



Now, the mergeSort function has to partition the array into two halves and recursively call itself, combining the sorted halves with the merge function. Look at the example code below:

Example:

    function mergeSort(array) {
        if (array.length < 2) {
            return array; // base case: a single element is already sorted
        }

        var midpoint = Math.floor(array.length / 2),
            leftArray = array.slice(0, midpoint),
            rightArray = array.slice(midpoint);

        return merge(mergeSort(leftArray), mergeSort(rightArray));
    }
    mergeSort([6, 1, 23, 4, 2, 3]); // [1, 2, 3, 4, 6, 23]



Time Complexity: $O\lparen nlog_{2}\lparen n\rparen\rparen$
Space Complexity: $O\lparen n\rparen$

Mergesort has a large space complexity of $O\lparen n\rparen$ because of the need to create and merge subarrays. Use mergesort when a stable sort is needed. A stable sort is one that's guaranteed not to reorder elements with identical keys. Its $O\lparen n\rparen$ space usage is mergesort's main disadvantage. (😲)
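Stability can be seen by sorting objects on a key: items with equal keys keep their original relative order. Here's a sketch modeled on the merge and mergeSort functions above (mergeSortBy and mergeBy are my own names, and the objects are made up for illustration):

```javascript
// A sketch: merge sort over objects, comparing by a numeric key.
// Because mergeBy takes from the left array on ties (<=), items with
// equal keys keep their original relative order (a stable sort).
function mergeBy(leftA, rightA, key) {
    var results = [], leftIndex = 0, rightIndex = 0;
    while (leftIndex < leftA.length && rightIndex < rightA.length) {
        if (leftA[leftIndex][key] <= rightA[rightIndex][key]) {
            results.push(leftA[leftIndex++]); // ties come from the left: stable
        } else {
            results.push(rightA[rightIndex++]);
        }
    }
    return results.concat(leftA.slice(leftIndex)).concat(rightA.slice(rightIndex));
}
function mergeSortBy(array, key) {
    if (array.length < 2) return array;
    var midpoint = Math.floor(array.length / 2);
    return mergeBy(mergeSortBy(array.slice(0, midpoint), key),
                   mergeSortBy(array.slice(midpoint), key), key);
}
var items = [{ n: 2, id: "a" }, { n: 1, id: "b" }, { n: 2, id: "c" }];
mergeSortBy(items, "n"); // [{n:1,id:"b"}, {n:2,id:"a"}, {n:2,id:"c"}]
```

Note that "a" still comes before "c" in the result even though both have n equal to 2; that is exactly what stability means.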

### Count Sort

Count sort can be done in $O\lparen k+n\rparen$ time, where $n$ is the number of elements and $k$ is the range of values. Instead of sorting by swapping elements, it works by counting the occurrences of each element in the array. Once the occurrences are counted, the new array can be created from those counts. Here's a short video to visualize what count sort is (🎬 [Length: 2:19]):

Now look at the image below to solidify your understanding. Here's an implementation using a JavaScript object as the counter:

Example:

    function countSort(array) {
        var hash = {}, countArr = [];
        for (var i = 0; i < array.length; i++) {
            if (!hash[array[i]]) {
                hash[array[i]] = 1;
            } else {
                hash[array[i]]++;
            }
        }

        for (var key in hash) {
            // push each key as many times as it was counted
            for (var i = 0; i < hash[key]; i++) {
                countArr.push(parseInt(key));
            }
        }

        return countArr;
    }
    countSort([6, 1, 23, 2, 3, 2, 1, 2, 2, 3, 3, 1, 123, 123, 4, 2, 3]);
    // [1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 4, 6, 23, 123, 123]



Time Complexity: $O\lparen k+n\rparen$
Space Complexity: $O\lparen k\rparen$

Note (📝): Use count sort when you're sorting integers with a limited range; it will be the fastest sort for that case.
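When the range of values is known in advance, the JavaScript object can be replaced by a plain counting array indexed by value, which is the textbook form of count sort. A sketch (countSortRange is my own name; it assumes non-negative integers no larger than max):

```javascript
// A sketch: counting sort with a plain array of counts, assuming the
// values are non-negative integers no larger than `max`
function countSortRange(array, max) {
    var counts = new Array(max + 1).fill(0);
    for (var i = 0; i < array.length; i++) {
        counts[array[i]]++; // count occurrences of each value
    }
    var result = [];
    for (var value = 0; value <= max; value++) {
        for (var c = 0; c < counts[value]; c++) {
            result.push(value); // emit each value as many times as it occurred
        }
    }
    return result;
}
countSortRange([6, 1, 2, 3, 2, 1, 2, 4, 3], 6); // [1, 1, 2, 2, 2, 3, 3, 4, 6]
```

Here $k$ shows up directly as the size of the counts array, which is why the space complexity is $O\lparen k\rparen$ .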

### JavaScript's Built-In Sort

JavaScript has a built-in sort() method for arrays which sorts elements in ascending order. However, by default it converts the elements to strings and sorts them alphabetically, so it will not work as expected for numbers. (😔) Consider the code below:

Example:

    var array = [12, 3, 4, 2, 1, 34, 23];
    array.sort(); // array: [1, 12, 2, 23, 3, 34, 4]



Notice that numbers starting with 1 came first (1, 12), then numbers starting with 2 (2, 23), then numbers starting with 3 (3, 34), and so forth. This is because no comparator function was passed, so JavaScript converted the elements into strings and sorted them alphabetically. To sort numbers correctly, use this:

Example:

    var array = [12, 3, 4, 2, 1, 34, 23];

    function comparatorNumber(a, b) {
        return a - b;
    }

    array.sort(comparatorNumber);
    // array: [1, 2, 3, 4, 12, 23, 34]



`a - b` indicates that the sort should go from smallest to biggest (ascending). If you want descending order, it can be done as follows:

Example:

    var array = [12, 3, 4, 2, 1, 34, 23];

    function comparatorNumber(a, b) {
        return b - a;
    }

    array.sort(comparatorNumber);
    // array: [34, 23, 12, 4, 3, 2, 1]



Note (📝): The sort() function can be useful when you need a quick way to sort something.
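The same comparator idea works for sorting objects by a property, which is a common real-life use. A sketch (the users array and its age property are made up for illustration):

```javascript
// A sketch: sort an array of objects by a numeric property using a comparator
var users = [
    { name: "Ana", age: 34 },
    { name: "Ben", age: 21 },
    { name: "Cam", age: 28 }
];
users.sort(function (a, b) {
    return a.age - b.age; // ascending by age
});
// users: [{name:"Ben",age:21}, {name:"Cam",age:28}, {name:"Ana",age:34}]
```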

# Summary (📚)

There are two ways to search inside an array: linear search and binary search. Binary search is faster, with $O\lparen log_{2}\lparen n\rparen\rparen$ time complexity, while linear search has $O\lparen n\rparen$ time complexity. However, binary search can be performed only on a sorted array.

Table 10-1 below summarizes the time and space complexities of the different sorting algorithms. The most efficient sorting algorithms are quicksort, mergesort, and count sort. Count sort, while the fastest, is limited to cases where the range of the array's values is known.

| Algorithm | Time Complexity | Space Complexity |
| --- | --- | --- |
| Bubble sort | $O\lparen n^{2}\rparen$ | $O\lparen 1\rparen$ |
| Selection sort | $O\lparen n^{2}\rparen$ | $O\lparen 1\rparen$ |
| Insertion sort | $O\lparen n^{2}\rparen$ | $O\lparen 1\rparen$ |
| Quicksort | $O\lparen nlog_{2}\lparen n\rparen\rparen$ average, $O\lparen n^{2}\rparen$ worst | $O\lparen log_{2}\lparen n\rparen\rparen$ |
| Mergesort | $O\lparen nlog_{2}\lparen n\rparen\rparen$ | $O\lparen n\rparen$ |
| Count sort | $O\lparen k+n\rparen$ | $O\lparen k\rparen$ |

# Challenge (😤🔥👊)

These exercises on searching and sorting algorithms cover varying problems to help solidify the knowledge you gained from this part. Don't forget to post your answer in the comments (with an explanation too): (😃)

## IMPLEMENT A SQUARE ROOT FUNCTION FOR AN INTEGER WITHOUT USING ANY MATH LIBRARIES

Problem: The first solution that may come to mind is trying every possibility from 1 up to the number. See the example output:

Example:

    sqrtIntNaive(9); // Must print '3'


You must code an algorithm based on the given time complexities (can you? (😎)):

Note (📝): This means you must show two solutions to this problem, because there are two time complexities to satisfy:

Time Complexity: $O(n)$
Time Complexity: $O(log_{2}(n))$

## FIND IF TWO ELEMENTS OF AN ARRAY ADD UP TO A GIVEN NUMBER

Problem: The simple approach to this problem is to check every other element for each element in the array. See the example output:

Example:

    findTwoSum([1, 2, 3, 4, 5], 6); // Must print 'true' because 1 + 5 is 6


You must code an algorithm based on the given time and space complexity (can you? (😎)):

Time Complexity: $O(n)$
Space Complexity: $O(n)$

# Up Next (👉) Part 11: Hash Tables (🔥🔥) (July 31-August 1, 2020)

### Edison Pebojot (👨‍💻)

I started using computers and writing software around 2008 and 2009, and I taught myself programming. However, I wasn't a typical "nerd-klutz". 😅
