Fernanda Kipper

Algorithm Time Complexity Simplified

Recently, I have been studying algorithm complexity, and since I like to write about things I am learning, I decided to share what I have learned so far about this subject to help others.

How to measure Algorithm Complexity

First of all, let's talk about what counts when we measure the time complexity of an algorithm. To do that, we need to understand what asymptotic complexity stands for.
But remember: we can't measure the time complexity of a program, only of an algorithm, and an algorithm always ends at some point.

Asymptotic complexity

If we compared the absolute execution times of two algorithms, we would be comparing subjective values, since they are specific to particular hardware and context. For example, even if we ran both algorithms on the same computer, each program would have to run without sharing the CPU with other programs or operating system services, and even the order in which we run the programs can affect the result, due to the thermal characteristics of the CPU or mere oscillations in ambient temperature.
So we need to compare their running times mathematically, and this is where the term asymptotic complexity comes in: the time complexity of an algorithm summarizes how its execution time grows with the input size.

This growth is calculated based on how many steps (for example, how many times it enters a for loop) an algorithm needs to perform in order to solve a problem given n elements as input.

But we cannot consider just one value of n to measure the complexity of an algorithm, because, as we know, a program can be very efficient for large inputs but not the best choice for small ones, and the other way around.
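To make the idea of counting steps concrete, here is a minimal sketch I added (the function name sumArray is mine, not from the post): the loop body runs once for each of the n elements, so the number of steps grows linearly with the input size.

// Counting steps: the loop body runs once per element,
// so the step count grows linearly with n.
int sumArray(int data[], int n){
  int total = 0;        // happens once, regardless of n
  for(int i = 0; i < n; i++){
    total += data[i];   // happens n times
  }
  return total;
}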

Worst and Best Case

Since it's almost impossible to consider all cases, we choose the worst and the best input scenarios to calculate the complexity of a certain algorithm.

Worst-case runtime means that you are feeding the worst possible input (of that size) into your algorithm. Best-case runtime means that you are feeding the best possible input into your algorithm, which is the fastest scenario.
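A classic illustration of this is a linear search (a hypothetical example I added, not from the original post): the best case is when the target sits at index 0 and the loop runs once; the worst case is when the target is at the end, or absent, and the loop scans every element.

// Best case: target at index 0, loop runs once.
// Worst case: target at the end or absent, loop runs len times.
int linearSearch(int data[], int len, int target){
  for(int i = 0; i < len; i++){
    if(data[i] == target){
      return i;   // found early: the best case returns immediately
    }
  }
  return -1;      // not found: the worst case scanned every element
}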

Big O Notation

In short, Big O notation is a way to classify how scalable our algorithm is. It helps us determine how our algorithm will behave in the WORST case, or how long it will take to complete based on its input size.

To get the final equation, there are some rules we need to follow. The first is to drop constant values (such as variable declarations and other expressions that happen only once while your algorithm runs, regardless of input size) and non-dominant terms (I will explain these in the next section), since they have minimal effect on the growth curve of the time complexity when the input size is very large.
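As a sketch of what "dropping constants" means in practice (the function name maxValue is illustrative, and it assumes len is at least 1), the function below does a fixed amount of work before the loop; its step count is roughly 2 + N, but the constant 2 is irrelevant as N grows, so we classify it as O(N).

// Step count is roughly 2 + N: the two lines before the loop run
// once each regardless of input size, so we drop them and keep O(N).
int maxValue(int data[], int len){
  int max = data[0];   // (1) constant, happens once (assumes len >= 1)
  int i = 1;           // (1) constant, happens once
  for(; i < len; i++){ // runs about N times
    if(data[i] > max) max = data[i];
  }
  return max;
}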
Complexity curve (chart comparing how the running time of different Big O classes grows with input size)

Reading the chart with the horizontal axis as the input size and the vertical axis as the growth of the running time, note that the curves of exponential degree are the worst ones: as the input size grows, the running time more than doubles, and constants have almost no effect on the growth of those curves.

Practice

After all this theoretical explanation, let's dive in and get our hands dirty.
Let's consider the following algorithm, which prints all possible pairs from a given array; order matters, and an element cannot be paired with itself.

#include <stdio.h>

// Prints every ordered pair from the array; an element is never
// paired with itself.
void printPossiblePairs(int data[], int len){
  int cont, cont2;
  for(cont = 0; cont < len; cont++){
    for(cont2 = 0; cont2 < len; cont2++){
      if(cont != cont2){
        printf("%d - %d\n", data[cont], data[cont2]);
      }
    }
  }
}
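For instance, calling it from a small main (a usage sketch I added, not part of the original post) with a three-element array prints all 3 × 3 - 3 = 6 ordered pairs:

int main(void){
  int data[] = {1, 2, 3};
  printPossiblePairs(data, 3);
  // Prints, one per line: 1 - 2, 1 - 3, 2 - 1, 2 - 3, 3 - 1, 3 - 2
  return 0;
}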

This algorithm's worst and best scenarios are the same: the loops always execute the same number of times, no matter what values the array holds.

Checking step by step, we need to work out how many times each step will repeat based on the input size, which we will call N; in this case, the input size is the array length. So let's analyze the algorithm shown earlier:

The first line of the body will execute only once:

void printPossiblePairs(int data[], int len){
  int cont, cont2; // (1)

This line will execute (N + 1) times; in other words, it will execute once for every element in the array, plus one extra time (when the loop condition is checked, fails, and the loop ends):

  for(cont = 0; cont < len; cont++){ // N + 1

This line will execute N(N + 1) times, because the inner loop's condition is checked (N + 1) times on each of the N iterations of the outer loop's body:

    for(cont2 = 0; cont2 < len; cont2++){ // N(N + 1)

This check will run once on every iteration of the inner loop's body, that is, N × N times in total.
Note that the if comparison is a constant-time step, so by itself it has almost no effect on the time complexity:

     if(cont != cont2){ // (N)(N)(1)

To finish, this line will run only when the indexes cont and cont2 are different. They are equal once on every iteration of the outer loop, so the printf runs N times fewer than the if check:

     printf("%d - %d\n", data[cont], data[cont2]); //// (N + 1)(N + 1)(1) - 1

Then, summing every step, your final equation is:
1 + (N + 1) + N(N + 1) + (N)(N)(1) + ((N)(N)(1) - N)
Based on the rules we talked about before, we drop constant values such as (1) and non-dominant terms such as N and (N + 1), because these terms have no effect when we are dealing with big input sizes, so we leave only the dominant term:
(N)(N)
(N²)
Our Big O is: O(N²)
When N increases, the running time grows in proportion to N × N.
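If you want to check this growth empirically, here is a quick sketch I put together (the countPossiblePairs helper is hypothetical, not from the original post): replace the printf with a counter and watch the count as N doubles; it lands exactly on N² - N, which is dominated by N².

#include <stdio.h>

// Counts how many pairs would be printed for an array of length len.
long countPossiblePairs(int len){
  long count = 0;
  for(int cont = 0; cont < len; cont++){
    for(int cont2 = 0; cont2 < len; cont2++){
      if(cont != cont2){
        count++; // stands in for the printf call
      }
    }
  }
  return count;
}

int main(void){
  // Doubling N roughly quadruples the count: quadratic growth.
  printf("%ld\n", countPossiblePairs(10)); // 90   (10*10 - 10)
  printf("%ld\n", countPossiblePairs(20)); // 380  (20*20 - 20)
  printf("%ld\n", countPossiblePairs(40)); // 1560 (40*40 - 40)
  return 0;
}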
