Fernanda Kipper

Algorithm Time Complexity Simplified

Recently, I have been studying algorithm complexity, and since I like to write about the things I am learning, I decided to share what I have learned about this subject so far, hoping it helps others.

How to measure Algorithm Complexity

First of all, let's talk about what counts when we measure the time complexity of an algorithm. To do that, we need to understand what asymptotic complexity means.
But remember: we measure the time complexity of an algorithm, not of a program, and algorithms always terminate at some point.

Asymptotic complexity

If we compared the absolute execution times of two algorithms, or the programmers who wrote the code, and so on, we would be comparing subjective values, since they are specific to particular hardware and particular people. We need a mathematical way to compare algorithms, and this is where the term asymptotic complexity comes in: the time complexity of an algorithm summarizes how its execution time grows with the input size.

This growth is calculated based on how many steps (for example, how many times it enters a for loop) an algorithm needs to perform in order to solve a problem, given an input of n elements.
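For example, here is a small sketch I put together (the function name and values are just illustrative) of a linear pass over an array with an explicit step counter, to make the idea of counting steps concrete:

#include <stdio.h>

// Illustrative sketch: sums an array while counting how many times
// the loop body runs. For an array of n elements, the body runs
// exactly n times, so the step count grows linearly with the input size.
int sumWithSteps(int data[], int len, int *steps){
  int total = 0;
  *steps = 0;
  for(int i = 0; i < len; i++){
    total += data[i];
    (*steps)++; // one step per element
  }
  return total;
}

int main(void){
  int data[] = {4, 8, 15, 16, 23, 42};
  int steps;
  int total = sumWithSteps(data, 6, &steps);
  printf("sum = %d, steps = %d\n", total, steps); // steps == 6 == n
  return 0;
}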

But we cannot consider just one value of n to measure the complexity of an algorithm, because, as we know, an algorithm can be very efficient for large inputs but not the best choice for small ones, and vice versa.

Worst and Best Case

Since it is almost impossible to consider all cases, we choose the worst and best input scenarios to calculate the complexity of a certain algorithm.

Worst-case runtime means that you are feeding the worst possible input (of that size) into your algorithm. Best-case runtime means that you are feeding the best possible input into your algorithm, which is the fastest scenario.
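To make this concrete, consider a linear search (my own illustrative sketch, not part of the original example): its best and worst cases differ depending on where, or whether, the target appears:

#include <stdio.h>

// Illustrative sketch: linear search.
// Best case: the target is at index 0, so the loop body runs only once.
// Worst case: the target is absent (or at the last position),
// so the loop compares against all len elements.
int linearSearch(int data[], int len, int target){
  for(int i = 0; i < len; i++){
    if(data[i] == target){
      return i; // found
    }
  }
  return -1; // not found
}

int main(void){
  int data[] = {7, 3, 9, 1, 5};
  printf("%d\n", linearSearch(data, 5, 7)); // best case: 1 comparison
  printf("%d\n", linearSearch(data, 5, 8)); // worst case: 5 comparisons
  return 0;
}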

Big O Notation

In short, Big O notation is a way to classify how scalable our algorithm is. It helps us determine how our algorithm will behave in the WORST case, that is, how long it will take to complete based on the size of its input.

To get the final equation, there are some rules we need to follow, such as dropping constant values (like variable declarations and other expressions that only happen once while your algorithm runs, regardless of the input size) and non-dominant terms (which I will explain in the next section), since these have minimal effect on the growth of the time complexity curve when the input size is very large.
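As a hypothetical illustration of these rules, the function below does a fixed amount of setup work, a single loop and a nested loop; its step count is roughly 2 + N + N*N, which simplifies to O(N²) once we drop the constant and the non-dominant N term:

#include <stdio.h>

// Hypothetical example: the exact step count is roughly 2 + N + N*N.
// Dropping the constant (2) and the non-dominant term (N) leaves N*N,
// so this algorithm is O(N^2).
void example(int data[], int len){
  int sum = 0;   // 1 step
  int count = 0; // 1 step
  for(int i = 0; i < len; i++){
    sum += data[i]; // runs N times
  }
  for(int i = 0; i < len; i++){
    for(int j = 0; j < len; j++){
      count++; // runs N * N times
    }
  }
  printf("%d %d\n", sum, count);
}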
Complexity Curve

Plotting the growth of time complexity against the input size, note that the curves with higher-degree or exponential terms are the worst ones: as the input size grows, the running time more than doubles, and constants have almost no effect on the growth of the curve.
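To get a feel for how differently these curves grow, here is a small sketch (my own, not taken from the chart above) that prints N, N² and 2^N for a few input sizes:

#include <stdio.h>

// Prints how different growth rates diverge as the input size increases.
int main(void){
  printf("%5s %12s %15s\n", "N", "N^2", "2^N");
  for(int n = 1; n <= 20; n++){
    printf("%5d %12lld %15llu\n", n, (long long)n * n, 1ULL << n); // 2^N via a bit shift
  }
  return 0;
}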

Practice

After all the theoretical explanation, let's dive in and get our hands dirty.
Let's consider the following algorithm, which prints all possible pairs from a given array; the order matters, and an element cannot be paired with itself.

#include <stdio.h>

// Prints every ordered pair of distinct elements in the array
void printPossiblePairs(int data[], int len){
  int cont, cont2;
  for(cont = 0; cont < len; cont++){
    for(cont2 = 0; cont2 < len; cont2++){
      if(cont != cont2){
        printf("%d - %d\n", data[cont], data[cont2]);
      }
    }
  }
}

This algorithmΒ΄s WORST and BEST case are equal
Checking step by step, we need to conclude how many times each step would repeat based on input size that we will call N - in this case the input size consists on the array length.

void printPossiblePairs(int data[], int len){
  int cont, cont2; // only once - (1)
  for(cont = 0; cont < len; cont++){ // N + 1
  // the size of the array + one (the final check that ends the loop)
    for(cont2 = 0; cont2 < len; cont2++){ // (N + 1)(N + 1)
    // (the size of the array + one) * (the size of the array + one),
    // because it runs N + 1 times for each of the N + 1 times the outer loop runs
      if(cont != cont2){ // (N + 1)(N + 1)(1)
      // runs once for every iteration of the inner loop
        printf("%d - %d\n", data[cont], data[cont2]); // (N + 1)(N + 1)(1) - N
        // runs only when the indexes are different,
        // i.e. all the iterations of the inner loop minus the number of elements
      }
    }
  }
}

Then, the final equation is (N + 1)(N + 1)(1) - (N).
Based on the rules we talked about before, we drop constant values such as 1 and non-dominant terms such as - N, leaving only the dominant terms.
Note that the if statement has almost no effect on the time complexity.
(N)(N)
(N²)
Our Big O is: O(N²)
When N grows, the running time grows in proportion to N * N.
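As a quick sanity check (an extra example I added, not part of the original walkthrough), you can count how many pairs printPossiblePairs emits for a few array sizes; doubling N roughly quadruples the work, which is exactly the O(N²) behavior:

#include <stdio.h>

// printPossiblePairs emits N * (N - 1) pairs for an array of N elements.
// Doubling N roughly quadruples that count, matching O(N^2) growth.
int main(void){
  int sizes[] = {10, 20, 40, 80};
  for(int i = 0; i < 4; i++){
    int n = sizes[i];
    printf("N = %2d -> pairs = %d\n", n, n * (n - 1));
  }
  return 0;
}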
