HABINEZA Janvier
Algorithms and Data Structures for Non-Computer-Scientist Programmers

Part one: Understanding Big O notation

Simply put, Big O notation is used to describe the worst-case running time of an algorithm. Take these two functions as an example.

Function one

const printNumbers = n => {
    for (let i = 0; i < n; i++) console.log(i);
}

Function two

const getElement = arr => {
    const value = arr[3];
    console.log(value);
}

Both of these JavaScript functions print to the console, but if we analyze how long each takes to run (assuming the same hardware, OS, and CPU), they behave very differently.
The time required to execute the second function does not change with the input, so we call it O(1). The first one is different: the time it takes to execute printNumbers(10) is not the same as printNumbers(1000000), because in the worst case it prints every number up to n-1. Since its running time is directly proportional to the input size (n), we call it O(n).
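
If you want to see this difference for yourself, one rough way (a sketch; exact numbers will vary by machine and JavaScript engine) is to time both functions with console.time:

// Rough timing sketch: the numbers are illustrative, not exact
console.time('printNumbers');
printNumbers(100000);            // time grows with n
console.timeEnd('printNumbers');

console.time('getElement');
getElement([1, 2, 3, 4, 5]);     // time stays flat regardless of array size
console.timeEnd('getElement');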

When solving a given problem, everyone is free to choose their own algorithm, but if one solution runs in a minute and another runs in milliseconds on the same hardware, OS, and CPU, that is where efficiency comes in.
To compare the efficiency of two algorithms we need two things:

  1. Time complexity of an algorithm: how long the algorithm takes to execute as a function of its input size.

  2. Space complexity of an algorithm: how much memory the algorithm consumes as a function of its input size.

Note that a good algorithm is one that executes in less time and consumes less memory in the process; the sketch below illustrates the space side of that trade-off.
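
Here is a minimal sketch contrasting the two space behaviors (the function names are my own, made up for illustration): both functions sum an array in O(n) time, but the first uses O(1) extra space while the second allocates an extra array that grows with the input, using O(n) space.

// O(1) space: only one extra variable, regardless of input size
const sumInPlace = (array) => {
    let total = 0;
    for (const element of array) total += element;
    return total;
}

// O(n) space: the running-totals array grows with the input
const sumWithHistory = (array) => {
    const runningTotals = [];
    let total = 0;
    for (const element of array) {
        total += element;
        runningTotals.push(total);
    }
    return total;
}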

Let’s stop beating around the bush and get started: what is Big O, and why is it important?

By definition (from Wikipedia):

Big O is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. So, as we said at the beginning, it is used to measure the performance of a given algorithm.

Let us compare some scenarios to see how this applies in software engineering. In this course we will use JavaScript, but in some scenarios we may use pseudocode.

1. O(1): Constant time

This is an algorithm that runs in constant time regardless of the input size: it performs the same number of operations whether the input has ten elements or ten million.

const printNumber = (array) => {
    // Accessing one element by index takes the same time
    // no matter how long the array is
    console.log(array[0]);
}

The time taken by this function is independent of the length of the array (the argument), so we call it O(1).
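
In other words, the amount of work stays the same no matter how big the argument gets:

printNumber([7, 8, 9]);                   // one array access, one log
printNumber(new Array(1000000).fill(0));  // still one array access, one log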

2. O(n): Linear time

Let us try to revamp our first function so that it can print all elements in the array.

const printElement = (array) => {
    for (const element of array) console.log(element);
}

So, what happens when the input (the array we are looping over) grows? This function always prints every element up to the last one (our worst case), so an array of 10 elements takes more time than an array of 2, and an array of 100000 elements takes more time still. The running time grows as a linear function of the input size, so we call it O(n).

3. O(n^2): Quadratic time

Think of a nested loop: the outer loop runs n times in the worst case, and the inner loop runs n times for each iteration of the outer loop, resulting in n^2 iterations in total. Let us take the example of a function that checks for duplicates in an array and returns the number of duplicates found.

const numberOfDuplicates = (array) => {
    let duplicates = 0;
    const len = array.length;
    for (let i = 0; i < len; i++) {
        // Start j at i + 1 so an element is never compared with itself
        for (let j = i + 1; j < len; j++) {
            if (array[i] === array[j]) {
                duplicates++;
            }
        }
    }
    return duplicates;
}
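A quick usage example (note that starting the inner loop at i + 1 roughly halves the comparisons, but as the coefficient rule below explains, the algorithm is still O(n^2)):

console.log(numberOfDuplicates([1, 2, 2, 3, 3])); // 2
console.log(numberOfDuplicates([1, 2, 3]));       // 0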

3. O(log n): Logarithmic time

The running time of this kind of algorithm grows far more slowly than the input size, because each step cuts the remaining work in half instead of touching every element. That makes it more efficient than a linear (let alone a quadratic) algorithm. A classic example is the binary search algorithm, which works on a sorted array:

const binarySearch = (array, item) => {
    let startingPoint = 0;
    let endPoint = array.length - 1;
    while (startingPoint <= endPoint) {
        // Set the pivot to the middle of the current range
        let pivot = Math.floor((startingPoint + endPoint) / 2);
        if (array[pivot] === item) return pivot;
        if (array[pivot] > item) {
            endPoint = pivot - 1;      // search the left half
        } else {
            startingPoint = pivot + 1; // search the right half
        }
    }
    return -1; // item not found
};

In this example, every iteration discards half of the remaining search range instead of scanning the whole input, which is why doubling the input size adds only one extra step.
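
To make that concrete: a sorted array of 1,000,000 elements needs at most about 20 comparisons, because 2^20 ≈ 1,000,000. A small usage sketch:

const sorted = [1, 3, 5, 7, 9, 11, 13];
console.log(binarySearch(sorted, 7)); // 3 (the index of 7)
console.log(binarySearch(sorted, 4)); // -1 (not found)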

Rules of Big-O Notation

Coefficient Rule: "Get rid of all constants"
This rule states that you should ignore any constant factor or term that is not related to the input size; such coefficients become negligible as the input grows.

const printAllElements = (arr) => {
   console.log(arr[0]) // O(1)
   for(let i=0; i<arr.length; i++){
     console.log(arr[i]); // O(n)
   }
}

For this function we have O(1) + O(n) = O(n + 1). The constant term 1 is independent of the input size, so as n approaches infinity it becomes negligible, and our algorithm is still O(n).
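
The same rule removes multiplicative constants. In this illustrative sketch (my own example), the function loops over the input three times, doing 3n operations in total, yet it is still O(n) because the factor 3 does not depend on n:

const printThreeTimes = (arr) => {
   for(let pass=0; pass<3; pass++){        // 3 passes: a constant, unrelated to n
     for(let i=0; i<arr.length; i++){
       console.log(arr[i]);                // n logs per pass, 3n in total
     }
   }
} // O(3n) reduces to O(n)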

Sum Rule: "Add Big-Os up"
As the name suggests, we add the time complexities of the individual parts to get the overall time complexity of the whole algorithm.

const printAllElements = (arr) => {
   for(let i=0; i<arr.length; i++){
     console.log(arr[i]); // O(n)
   }
   for(let i=0; i<arr.length; i++){
     console.log(arr[i]); // O(n)
   }
}

For this algorithm we add both O(n)s and get O(2n); applying the coefficient rule, the resulting time complexity is O(n).

const printAllElements = (arr1, arr2) => {
   for(let i=0; i<arr1.length; i++){
     console.log(arr1[i]); // O(n)
   }
   for(let i=0; i<arr2.length; i++){
     console.log(arr2[i]); // O(m)
   }
}

For this algorithm, the two loops run over independent inputs, so we add their complexities and get O(n + m). Both terms are linear, but neither can be dropped, because n and m can grow independently of each other.

Product Rule: "Multiply Big-Os"
When one loop is nested inside another, we multiply their complexities:

const printAllElements = (arr) => {
   for(let i=0; i<arr.length; i++){
     console.log(arr[i]); // O(n)
     for(let j=0; j<arr.length; j++){
       console.log(arr[j]); // O(n) for each outer iteration
     }
   }
}

In this example, the inner loop runs n times for each iteration of the outer loop, so the overall run time complexity is O(n * n), which is O(n^2).
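
The rule extends to deeper nesting: each extra level multiplies in another factor of n. A minimal sketch (my own illustrative example) with three nested loops runs in O(n^3), cubic time:

const printAllTriples = (arr) => {
   for(let i=0; i<arr.length; i++){       // n iterations
     for(let j=0; j<arr.length; j++){     // n iterations per i
       for(let k=0; k<arr.length; k++){   // n iterations per (i, j) pair
         console.log(arr[i], arr[j], arr[k]);
       }
     }
   }
} // n * n * n operations: O(n^3)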

To sum up, let’s use a graph to compare these types of time complexity.

[Graph: number of operations versus input size for constant, logarithmic, linear, quadratic, cubic, and exponential time complexities]

From this graph, we can say that an algorithm with:

  • Constant time complexity is excellent
  • Logarithmic time complexity is good
  • Linear time complexity is fair
  • Quadratic, cubic, and exponential time complexity are the worst
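
To put rough numbers on that ranking (simple arithmetic, counting one operation per step), here is roughly how many operations each class performs for an input of size n = 1,000:

Complexity | Operations for n = 1,000
O(1)       | 1
O(log n)   | ~10
O(n)       | 1,000
O(n^2)     | 1,000,000
O(n^3)     | 1,000,000,000
O(2^n)     | ~10^301 (astronomically large)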
