Bhav Kushwaha
QuickSort - Time Analysis (Part2)

Stating the Fundamentals

1) Suppose we have a simple function to print every item in a list:

def print_items(items):
    for item in items:
        print(item)

This function goes through every item in the list and prints it out.
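
For example, calling it on a small list:

print_items([1, 2, 3])

This prints 1, 2, and 3, each on its own line.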

2) Now consider the same function, but with a one-second sleep() added to every iteration:

from time import sleep

def print_items2(items):
    for item in items:
        sleep(1)          # pause one second per item
        print(item)

The Key points:

  • Both functions loop through the list once, so theoretically they both have O(n) time complexity.

  • But print_items() is evidently faster than print_items2(), since it doesn't contain the sleep() call.

  • So even though both functions have the same speed in Big O notation, O(n), print_items is faster in practice.

From the above example we gather:

  • When we write Big O notation like O(n), it really means this:

c * n

  • Here c is some fixed amount of time that your algorithm takes, called the constant.

For example, it might be 10 ms for print_items and 1 second for print_items2.
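
A quick back-of-the-envelope sketch (treating those numbers as hypothetical per-item constants) shows how the same O(n) can hide very different running times:

n = 1000                          # hypothetical list size

time_print_items  = 0.010 * n     # ~10 ms per item -> 10 seconds total
time_print_items2 = 1.0 * n       # ~1 s per item   -> 1000 seconds total

Both totals grow linearly with n (that's the O(n) part), but the constant c makes one of them 100 times slower.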

Example of Binary Search vs Simple Search

Suppose both the algorithms had the following constants:

(Image: hypothetical constants for Simple Search vs Binary Search, with Simple Search's constant much smaller.)

You might assume Simple Search is way faster, since its constant is much lower than Binary Search's.

But now consider a list of 4 billion elements!

(Image: search times on a list of 4 billion elements, where Binary Search finishes far sooner than Simple Search.)

NOTE: At this scale the constant c hardly matters, and Binary Search proves to be far faster.
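
To make that concrete, here is a rough sketch (the constants are assumptions for illustration: 10 ms per element for Simple Search, 1 second per step for Binary Search):

import math

n = 4_000_000_000                    # 4 billion elements

simple_time = 0.010 * n              # Simple Search: one cheap check per element
binary_time = 1.0 * math.log2(n)     # Binary Search: ~32 expensive steps

print(simple_time / 86400)           # ~463 days
print(binary_time)                   # ~32 seconds

Even with a constant 100 times larger, Binary Search wins because log2(n) grows so much more slowly than n.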

QuickSort vs MergeSort Case

In this case, the constant associated with the time complexity plays an important role in determining the better-performing sorting algorithm.

  • Quicksort has a smaller constant than Mergesort!

  • So if they're both O(nlogn) time, then quicksort is faster!

  • And Quicksort is faster in practice too, since it hits the average case way more often than the worst case.
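
A rough micro-benchmark sketch of this (my own illustration; exact numbers depend on the machine and input, and these are simple list-based implementations rather than optimized ones):

import random
import time

def quicksort(arr):
    # List-based quicksort with a random pivot.
    if len(arr) < 2:
        return arr
    pivot = random.choice(arr)
    less    = [x for x in arr if x < pivot]
    equal   = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

def mergesort(arr):
    # List-based mergesort: split in half, sort each half, then merge.
    if len(arr) < 2:
        return arr
    mid = len(arr) // 2
    left, right = mergesort(arr[:mid]), mergesort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

data = [random.randint(0, 1_000_000) for _ in range(100_000)]
for sort in (quicksort, mergesort):
    start = time.perf_counter()
    sort(data[:])
    print(sort.__name__, time.perf_counter() - start)

Both do O(nlogn) work, so any consistent gap between the two timings comes from the constant factors.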

Different Cases of Quicksort

Worst Case

  • The performance of quicksort heavily depends on the pivot you choose.

  • Suppose you always choose the first element as the pivot, and you call quicksort on an array that is already sorted.

(Image: the call stack for this worst case, with one level per element.)

  • Here, quicksort doesn't check whether the array is already sorted; it will try to sort it anyway.

  • Stack Size in Worst Case: O(n)
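
To see that linear growth, here is a minimal sketch (my own illustration, not the article's code) of a quicksort that always picks the first element as the pivot, while tracking the recursion depth:

def quicksort_first_pivot(arr, depth=0):
    # Base case: arrays of 0 or 1 elements are already sorted.
    if len(arr) < 2:
        return arr, depth
    pivot = arr[0]                                   # always the first element
    less    = [x for x in arr[1:] if x <= pivot]
    greater = [x for x in arr[1:] if x > pivot]
    sorted_less, d1 = quicksort_first_pivot(less, depth + 1)
    sorted_greater, d2 = quicksort_first_pivot(greater, depth + 1)
    return sorted_less + [pivot] + sorted_greater, max(d1, d2)

# On an already-sorted array, one side of every partition is empty,
# so the stack grows one level per element:
_, depth = quicksort_first_pivot(list(range(100)))
print(depth)   # 99 -> O(n) levels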

Best Case

  • Now instead, suppose you always pick the middle element as the pivot. Look at the call stack now:

(Image: the much shorter call stack when the middle element is chosen as the pivot.)

  • It's so short! Because you divide the array in half every time, you don't need to make as many recursive calls. You hit the base case sooner, and the call stack is much shorter.

  • Stack Size in Best Case: O(logn)
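
For example, with an 8-element array, halving at every step gives sub-problem sizes of 8 → 4 → 2 → 1, which is log2(8) = 3 levels of recursive calls before reaching the base case.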

Average Case

  • Now suppose you pick any element as the pivot and the rest of the elements are divided into sub-arrays around it. On average, you get essentially the same result as the best case.

  • Stack Size in Average Case: O(logn)
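
A random-pivot variant of the earlier sketch (again my own illustration) shows the same behavior: on average the splits are balanced enough that the depth stays near O(logn):

import random

def quicksort_random_pivot(arr, depth=0):
    # Base case: arrays of 0 or 1 elements are already sorted.
    if len(arr) < 2:
        return arr, depth
    pivot = random.choice(arr)                       # random pivot
    less    = [x for x in arr if x < pivot]
    equal   = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    sorted_less, d1 = quicksort_random_pivot(less, depth + 1)
    sorted_greater, d2 = quicksort_random_pivot(greater, depth + 1)
    return sorted_less + equal + sorted_greater, max(d1, d2)

# Even on an already-sorted input, the expected depth is O(logn):
_, depth = quicksort_random_pivot(list(range(100)))
print(depth)   # typically around 10-15, versus 99 for the first-pivot version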

Something about the Call Stack at every level

  • The first operation takes O(n) time, since you touch all 8 elements on this level of the stack. In fact, you touch O(n) elements on every level of the call stack.

(Image: every level of the call stack touches all 8 elements.)

  • Even if you partition the array differently, you are still touching O(n) elements every time.

(Image: even with uneven partitions, each level still touches O(n) elements.)

So each level takes O(n) time to complete.
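
Putting the two together with the 8-element example: about log2(8) = 3 levels, each touching all 8 elements, gives roughly 8 * 3 = 24 operations, which is exactly the n * logn pattern.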

Conclusion

  • From the above findings, we conclude that the height of the call stack is O(logn) and each level takes O(n) time.

  • The entire Quicksort algorithm will take: O(logn) * O(n) = O(nlogn)

This is the time complexity of the best case and average case of Quicksort.

  • In the Worst-Case, there are O(n) levels, so the algorithm takes O(n) * O(n) = O(n^2)

NOTE: If you always choose a random element in the array as the pivot, quicksort will complete in O(nlogn) time on average.

  • Quicksort is one of the fastest sorting algorithms out there!
