You Can’t Improve What You Don’t Understand: Know thy Algorithm’s Run Time Complexity

In my previous post in this series, I mentioned that in this article I’d explain how the efficiency of an algorithm could be measured.

But why measure the efficiency of algorithms, you ask?

Just as there could be several paths to a destination and we often go with the shortest path, there are usually several algorithms for solving one computational problem. Measuring the efficiency of algorithms helps us identify the shortest path to solving a given computational problem.

For a given task, we measure the efficiency of an algorithm in terms of:

* The space it requires to complete the task. This is formally called its space complexity.
* The time it takes to complete the task. This is formally called its time complexity.

Thus, measuring the efficiency of an algorithm implies deducing the time-space complexity of the algorithm.

Okay so how can I deduce the time-space complexity of my algorithm?

Very easy. Implement the algorithm, run it on a computer and observe how long it takes to execute and how much memory it uses.
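For instance, a rough sketch of that approach could use Python's time module (the sum_list function here is made up purely for illustration):

import time

def sum_list(arr):
    # a simple O(n) task we want to time
    total = 0
    for x in arr:
        total += x
    return total

data = list(range(1_000_000))

start = time.perf_counter()
sum_list(data)
elapsed = time.perf_counter() - start

print(f"took {elapsed:.4f} seconds")  # the number varies from machine to machine

And as you're about to point out, that number says as much about my machine as it does about the algorithm.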

Run it on a computer?

Well spotted! So if we were comparing the performance of different algorithms, we'd have to implement and run all of them? I mean, why write code for several algorithms and run them only to pick one at the end? That's a brute-force approach, and not a deft one, I'd say!

What if we could do a priori analysis of the time-space complexity of these algorithms, then implement only the most efficient one?

A priori analysis? How can we possibly measure how fast an algorithm will perform, or how much space it will take, without running it on a computer?

I will tell you how, but before I do, let me quickly correct this: you see, testing the time-space complexity of an algorithm by running it on a computer isn't even cool. Why? Computers have disparate hardware. In addition, the execution time and space used by our algorithm could be affected by the number of other processes running at the time of testing, so the results obtained could be decidedly misleading.

Oh, this actually makes sense. So how do you suggest I do this a priori analysis?

It’s easy! I present to you the Big-O notation.

Big-O notation? Warisdat?

Big-O provides us with a way to describe the performance of our algorithms without having to run them on a computer. With Big-O, we deduce how the number of steps an algorithm takes to complete a task will grow as a function of the input size in the worst case scenario. Algorithms always act on some data: in this case, the input. So with Big-O, we are basically just trying to figure out how the number of steps an algorithm takes grows with respect to an input of size n. Even though Big-O could be used to deduce the space complexity of an algorithm too, for brevity's sake this article will focus on the time complexity aspect.

Worst case scenario?

Big-O assumes, for example, that when searching a list for an element, the element does not exist in the list, in which case our algorithm will have to loop all the way to the end. With Big-O, we are only concerned with how well an algorithm will perform in the most terrible situation, when it has to do the hardest job.
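As a quick sketch (this linear_search helper is hypothetical, not something from earlier in the series), searching a list for a value that isn't there forces the loop to inspect every single element before giving up:

def linear_search(element, arr):
    # worst case: element is absent, so every item gets inspected
    for index, value in enumerate(arr):
        if value == element:
            return index
    return -1

print(linear_search(99, [1, 2, 3, 4, 5]))  # -1, after checking all 5 items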

How can this Big-O be used to measure the run time complexity of an algorithm?

There are existing classes of run time complexities. Usually, one of these is used to describe an algorithm’s runtime complexity. We will cover the most common ones here.

1. Constant time complexity O(1): An algorithm is said to have a constant time complexity if the number of steps it takes to complete a task stays the same even with increasing input size.

my_list = [1, 2, 3]

first_element = my_list[0]

In the code snippet above, even when the size of the defined list grows, this algorithm will always take one step to complete its task of retrieving the first item in the list.

2. Linear time O(n): For algorithms in this class, the number of steps required to complete a task grows in direct proportion to the input size. If the input size is n, then the algorithm will take n steps to complete its task. For example:

my_list = [1, 2, 3]

for element in my_list:
    print(element)

3. Quadratic time O(n*n): For algorithms in this class, the number of steps required to complete a task equals the square of the input size. This is so because they perform a linear time operation on each unit of the input. If the input size is n, the algorithm will require n*n steps to complete its task.

arr = [1, 2, 3]

for x in arr:
    for y in arr:
        print(x, y)

4. Logarithmic time O(log N): Algorithms in this class usually take log N steps to complete their task, where N is the size of the input and log N is the power that 2 must be raised to in order to give N. For example, if N = 16 then log 16 = 4, because 2^4 = 16. As you might have figured out already, algorithms in this class are relatively efficient. How do they achieve this efficiency? At each step, they cut the remaining input down to half its size. A classic example of an algorithm in this class is the binary search algorithm.

def binary_search(element, arr):
    """ Searches a sorted list in a binary fashion,
        where element is the search key and arr is
        the sorted list that's to be searched.
    """
    low, high = 0, len(arr) - 1

    while low <= high:
        mid_point = (low + high) // 2  # floor division

        if element == arr[mid_point]:
            print(f"match found at index: {mid_point}")
            break
        elif element > arr[mid_point]:
            low = mid_point + 1   # discard the left half
        else:
            high = mid_point - 1  # discard the right half
    else:
        print("element not in list")
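To see it in action, here's a quick usage example; the list and search keys are made up for illustration, and remember the input has to be sorted already:

sorted_nums = [2, 5, 8, 12, 16, 23, 38]

binary_search(23, sorted_nums)  # prints: match found at index: 5
binary_search(4, sorted_nums)   # prints: element not in list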

5. Exponential time O(2^n): Algorithms in this class usually take 2^n steps to complete a task, so the number of steps doubles with every unit increase in the input size. For example, if n = 3, an algorithm in this class will require 2^3 = 8 steps to complete its task. If n increases to 4, it will require 2^4 = 16 steps. As you can see, the number of steps doubles from 8 to 16.
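A commonly cited sketch of this kind of growth, added here purely for illustration, is the naive recursive Fibonacci function: each call spawns two more calls, so the number of calls grows roughly like 2^n.

def fib(n):
    """ Naive recursive Fibonacci: each call branches
        into two further calls, so the call tree
        roughly doubles with every increase in n. """
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55, computed with well over a hundred recursive calls

Strictly speaking the growth is closer to 1.6^n, but the doubling intuition is what matters here.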

While the most common time complexities have been covered in this article, the list here is not exhaustive; there are others. Furthermore, real-life software systems are usually made up of complex algorithms that could in fact be described with two or more classes of run time complexities. In situations like that, always describe your algorithm's performance in terms of the dominant, most expensive run time complexity present.
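As a hypothetical illustration, the function below does a single linear pass (O(n)) followed by a nested loop (O(n*n)); taken as a whole, its run time is reported using the dominant term, O(n*n):

def linear_then_quadratic(arr):
    # O(n) part: one pass over the input
    for x in arr:
        print(x)

    # O(n*n) part: a nested pass over the input
    for x in arr:
        for y in arr:
            print(x, y)

# O(n) + O(n*n) is simply described as O(n*n)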

Stay tuned for my next article on linked lists :)

