# Pivoting To Understand Quicksort [Part 2]

### Vaidehi Joshi

*This is the second installment in a two-part series on Quicksort. If you haven’t read Part 1 of this series, I recommend checking that out first!*

In part 1 of this series, we walked through how the quicksort algorithm works on a high level. In case you need a quick refresher, this algorithm has two important aspects: a pivot element, and two partitions around the pivot.

We’ll remember that quicksort works by choosing a pivot point (remember, this is only *somewhat* random!) and sorting the remaining elements so that items smaller than the pivot are to the left, or in front of the pivot, and items that are larger than the pivot are to the right, or behind the pivot. These two halves become the partitions, and the algorithm recursively calls itself on both of these partitions until the entire list is divided down into single-item lists. Then, it combines them all back together again.
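
The steps above can be sketched in a few lines of JavaScript. This is a hypothetical, simplified version, not the implementation we’ll study below: it builds new arrays rather than sorting in place, and it just grabs the last element as the pivot. Still, it captures the pivot–partition–recurse shape:

```
// A compact, functional sketch of the idea above (not an in-place sort):
// pick a pivot, split the rest into smaller/larger halves, and recurse.
function quickSortSketch(items) {
  if (items.length <= 1) return items;
  var pivot = items[items.length - 1];
  var rest = items.slice(0, -1);
  var smaller = rest.filter(function (x) { return x <= pivot; });
  var larger = rest.filter(function (x) { return x > pivot; });
  return quickSortSketch(smaller).concat([pivot], quickSortSketch(larger));
}

console.log(quickSortSketch([19, 22, 63, 105, 2, 46]).join(', '));
// 2, 19, 22, 46, 63, 105
```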

Quicksort is widely used and researched, and is often synonymous with the phrase “most efficient algorithm”. However, because there’s just *so much to know* about this algorithm, even though we covered a lot of ground last week, we still had so many unanswered questions!

Today, we’ll get to the bottom of these mysteries and try to understand what makes quicksort so lightning-quick…and why, sometimes, it isn’t!

### Good algorithm gone bad

Something we keep hearing again and again is that the pivot is a *somewhat* arbitrary element in the unsorted list. We know that *what* we choose as the pivot can be slightly random, but we should really be aiming for the “middle-most” element, or close to the **median** of the entire collection.

One of the reasons that this is important is because we want our two partitioned halves, the ones that we’ll call quicksort on recursively, to be equal in size. Last week, we learned that two unequally-sized partitions can be super problematic. But…we still don’t know why!

Time to find out. The best way to illustrate why this is a problem is by using an example, of course! In the illustration below, we have a list that’s actually *nearly* sorted. Let’s say that we choose the last element, 16, as our pivot element.

We’ll create two pointer references in order to do our swapping, and compare all of the elements to the pivot.

But guess what? *All* of the elements are smaller than our pivot! So, we’re not really doing any swapping here, are we?

Okay, this is still fine; we’re doing (*n* - 1) comparisons here, where *n* is the total number of elements to be sorted.

But what happens the second time around? Well, our pivot is 15. Again, as we start to compare all the elements to it, almost all of them are smaller than the pivot element.

Not only are our partitions lopsided here, but there’s something else bad going on, too! We’re making only *one less* comparison with each pass through the array. That’s not that great, is it? In fact, it’s terrible! If you’re cringing at this and feeling like you’ve seen this before, well, you’re right! The number of comparisons we’re making here is actually rather reminiscent of the bubble sort algorithm, which had a quadratic runtime, or *O(n²)*.

If we continued sorting our list here, we’d eventually execute *n²* comparison operations. That’s not ideal at all, which is exactly why it’s a great example of what happens if we don’t choose a pivot correctly! If we choose a pivot that doesn’t fall close to the median of the collection that we’re trying to sort, we’ll have to compare it to almost all of the elements, since they’ll mostly all be either larger or smaller than the pivot.
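
We can watch this quadratic behavior happen with a small, hypothetical comparison counter. It wraps a quicksort that always picks the *last* element as its pivot (a Lomuto-style partition, not the implementation we’ll look at later in this post), which is exactly the bad-pivot trap described above. On an already-sorted list of *n* items, it performs n(n - 1)/2 comparisons:

```
// A hypothetical helper (not the article's code): quicksort with a
// last-element pivot, counting every element-to-pivot comparison.
function quickSortCount(items, lo, hi, counter) {
  if (lo >= hi) return;
  var pivot = items[hi];
  var i = lo;
  for (var j = lo; j < hi; j++) {
    counter.comparisons++;
    if (items[j] < pivot) {
      var tmp = items[i]; items[i] = items[j]; items[j] = tmp;
      i++;
    }
  }
  var tmp2 = items[i]; items[i] = items[hi]; items[hi] = tmp2;
  quickSortCount(items, lo, i - 1, counter);
  quickSortCount(items, i + 1, hi, counter);
}

// A sorted 8-element list: every pass compares one fewer element.
var alreadySorted = [1, 2, 3, 4, 5, 6, 7, 8];
var counter = { comparisons: 0 };
quickSortCount(alreadySorted, 0, alreadySorted.length - 1, counter);
console.log(counter.comparisons); // 28, which is n * (n - 1) / 2 for n = 8
```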

There are some cases where we’re likely to fall into the trap of choosing a pivot that’s not near the median of a dataset. In fact, we kind of fell right into this trap just now!

For arrays that are nearly or completely sorted, quicksort operates at its worst. In other words, the average runtime for a sorted or a nearly-sorted list is quicksort’s worst case runtime: O(n²).

This is particularly important to remember if we know that we’ll be dealing with data that’s mostly sorted. In those situations, we want to stay away from quicksort, since it’s going to take a really long time to run.

### Pragmatic sorting

Okay, now that we’ve gone through the ins and outs of swapping elements in the two partition halves and choosing a pivot element, let’s look at a quicksort algorithm in action!

While I’ve generally been using Rosetta Code’s algorithms as examples in my posts, for this particular algorithm, I found another implementation that made a lot more sense to me. The algorithm below was written by Nicholas Zakas, who has some really great resources and blog posts on implementing computer science concepts in JavaScript! I really liked his implementation (thank you, Nick!), and I’m using his code below. I’ve tweaked his functions a bit, renaming things, adding my own comments, as well as some console.log’s that’ll come in handy when we run the code in just a minute.

So, let’s get down to business. We’ll start by looking at the outermost function first. This is the actual quickSort function itself, which takes in three arguments: an array of items to be sorted, a leftIndex, and a rightIndex, where the latter two are the left and right references that keep track of which elements will (potentially) be swapped.

```
function quickSort(items, leftIndex, rightIndex) {
  // Declare an index that will be our pivot reference.
  var pivotIndex;
  // If the array has only one item, it's already sorted!
  // If it has no items, we don't want to try to sort it!
  if (items.length > 1) {
    // As long as the array has two items, we can partition it.
    pivotIndex = partition(items, leftIndex, rightIndex);
    console.log('** pivot is: ', items[pivotIndex]);
    // If the left reference hasn't been incremented to
    // reach the pivot yet, we want to keep comparing it.
    if (leftIndex < pivotIndex - 1) {
      quickSort(items, leftIndex, pivotIndex - 1);
    }
    // If the right reference hasn't reached the
    // pivotIndex yet, we need to keep comparing it.
    if (pivotIndex < rightIndex) {
      quickSort(items, pivotIndex, rightIndex);
    }
  }
  return items;
}
```

There are quite a few things going on in this quickSort function, so let’s break it down. First, we declare an index variable that will be our pivot reference. Then, we only actually run this algorithm if the array of items being passed in has more than one item in it, which makes sense because if it’s empty or if there’s only one element, we wouldn’t want to even bother sorting it!

We’ll notice that there are two if conditional statements within the outer if: the logic in both of these is what is responsible for determining whether the rightIndex reference and leftIndex reference need to be compared still, or whether they can be recursively quickSorted into smaller parts.

But what about the work of partitioning? Well, there’s a function for that! We’ll notice that we’re invoking partition inside the outer if statement. This is what that function might look like:

```
// The partition() method takes a list of items, and a left
// reference, as well as a right reference. Both left and right
// are indexes to indicate where the pointers should start.
function partition(items, left, right) {
  // Find the pivot by adding the two indexes together
  // and dividing by two (the middle element, effectively).
  var pivot = items[Math.floor((right + left) / 2)];
  var l = left;
  var r = right;
  console.log('** pivot is: ', pivot);
  console.log('** left is: ', items[left]);
  console.log('** right is: ', items[right]);
  // Once the left reference is greater than the right reference,
  // we have finished sorting this set of items, and we can return.
  while (l <= r) {
    // If the left pointer is less than the pivot, increment it.
    // In other words, move the pointer to the right.
    while (items[l] < pivot) {
      l++;
      console.log('l is now pointing to: ', items[l]);
    }
    // If the right pointer is greater than the pivot, decrement it.
    // In other words, move the pointer to the left.
    while (items[r] > pivot) {
      r--;
      console.log('r is now pointing to: ', items[r]);
    }
    // If the left pointer hasn't passed the right pointer yet,
    // swap the two elements that they're pointing at.
    if (l <= r) {
      console.log('** now swapping l and r pointers: ', items[l], items[r]);
      swap(items, l, r);
      // After swapping, increment/decrement the pointers respectively.
      l++;
      r--;
      // console.log('l is now pointing to: ', items[l]);
      // console.log('r is now pointing to: ', items[r]);
    }
  }
  // Return the left index, which becomes the new pivot index.
  return l;
}
```

As the comments explain, the partition function takes in the items that we’re trying to partition, and the left and right indices that represent the start and end of the list of items.

We can see that we have two while loops nested inside of a larger while loop. This is where we do the work of comparing the left and right references to the pivot, which is ultimately what is responsible for determining whether two items need to be swapped.

We can also see that the partition function is calling out to the swap function! We looked at this briefly last week; this is what the swap function does:

```
function swap(items, leftPointerIndex, rightPointerIndex){
  // Create a temporary reference for the left item.
  var tempReference = items[leftPointerIndex];
  // Move left item to the index that contains right item.
  // Move right item to the temporary reference.
  items[leftPointerIndex] = items[rightPointerIndex];
  items[rightPointerIndex] = tempReference;
}
```

Compared to the two larger functions we just saw, the swap function has very little responsibility: it creates a tempReference that it uses in order to swap the two elements at the two indices that we pass into it as arguments. This is a really nice little bit of logic encapsulation, since it doesn’t have any concept of the quickSort or partition functions at all!
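
As a side note, in modern (ES2015+) JavaScript the same swap can be written with array destructuring, with no temporary variable in sight. This is just an alternative sketch, not part of Zakas’s original code:

```
// The same swap as above, using ES2015 array destructuring instead
// of a temporary reference (behaviorally identical to swap()).
function swapDestructure(items, i, j) {
  [items[i], items[j]] = [items[j], items[i]];
}

var demo = [1, 2, 3];
swapDestructure(demo, 0, 2);
console.log(demo.join(', ')); // 3, 2, 1
```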

Alright, let’s try and run this code. We’ll try our hand at sorting an array of items that looks like this: [19, 22, 63, 105, 2, 46].

```
var items = [19, 22, 63, 105, 2, 46];
quickSort(items, 0, items.length - 1);
> ** pivot is: 63
> ** left is: 19
> ** right is: 46
> l is now pointing to: 22
> l is now pointing to: 63
> ** now swapping l and r pointers: 63 46
> ** now swapping l and r pointers: 105 2
> ** pivot is: 105
> ** pivot is: 22
> ** left is: 19
> ** right is: 2
> l is now pointing to: 22
> ** now swapping l and r pointers: 22 2
> r is now pointing to: 2
> ** pivot is: 46
> ** pivot is: 19
> ** left is: 19
> ** right is: 2
> ** now swapping l and r pointers: 19 2
> ** pivot is: 19
> ** pivot is: 46
> ** left is: 46
> ** right is: 22
> ** now swapping l and r pointers: 46 22
> ** pivot is: 46
> ** pivot is: 105
> ** left is: 105
> ** right is: 63
> ** now swapping l and r pointers: 105 63
> ** pivot is: 105
>> (6) [2, 19, 22, 46, 63, 105]
```

Awesome! So, what’s going on here? Well, to start, we pass in our array, as well as both its start and end index, to the external quickSort algorithm. We can see that this particular implementation of quicksort chooses the element at the middle of this list as its pivot; in our case, that element is 63. Recall that it doesn’t really matter whether the pivot is the first or the last or the middle element, just as long as we’re not dealing with the worst case sorting for quicksort.

Notice how we keep incrementing the left pointer until we hit a scenario where the element at the left reference is greater than the element located at the right reference. We can see this happening if we watch how the value of left keeps changing: at first, left is pointing to 19, and then the reference is incremented to point to 22. It is only when left is pointing to 63 and right is pointing to 46 that the algorithm is actually in a position to swap, which is exactly what we see in the first swap line of the output. Once we understand how that first pass through the array functions, it should hopefully be a little bit easier to discern what’s happening in the subsequent passes.

This is, of course, only *one* way to go about implementing quicksort, and I tend to like it because it’s mostly easy to follow what’s going on. We could also write this same algorithm functionally and imperatively. And we could even try our hand at implementing it iteratively, without using recursion directly. But that’s probably a challenge for another day!
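
For the curious, here’s a rough sketch of what that iterative approach could look like: an explicit stack of [left, right] ranges stands in for the recursive calls. This is a hypothetical version using a simpler last-element pivot scheme, not the partition function from above:

```
// A hypothetical iterative quicksort: an explicit stack of [left, right]
// ranges replaces the recursive calls (last-element pivot, Lomuto-style).
function quickSortIterative(items) {
  var stack = [[0, items.length - 1]];
  while (stack.length > 0) {
    var range = stack.pop();
    var lo = range[0], hi = range[1];
    if (lo >= hi) continue;
    var pivot = items[hi];
    var i = lo;
    // Move everything smaller than the pivot to the front of the range.
    for (var j = lo; j < hi; j++) {
      if (items[j] < pivot) {
        var tmp = items[i]; items[i] = items[j]; items[j] = tmp;
        i++;
      }
    }
    // Drop the pivot into its final position.
    var t = items[i]; items[i] = items[hi]; items[hi] = t;
    // Instead of recursing, push both partitions onto the stack.
    stack.push([lo, i - 1]);
    stack.push([i + 1, hi]);
  }
  return items;
}

console.log(quickSortIterative([19, 22, 63, 105, 2, 46]).join(', '));
// 2, 19, 22, 46, 63, 105
```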

### Weighing the cost-benefit of sorting

Now that we’re a bit more well-versed in the implementation(s) of quicksort, there’s one last bit to cover: just how good *is* quicksort, and when would we want to use it?

Let’s start by taking a look at how the quicksort algorithm compares to its peers. We’re already familiar with the time complexity of quicksort: its average-case and best-case runtime performance are the same as merge sort’s, running in **linearithmic** time, or *O(n log n)*. So what other defining characteristics does this algorithm have?

Well, based upon my research, quicksort’s space complexity is a bit of a debate. In case we need a refresher, space complexity is determined by how much additional memory or space an algorithm needs in order to run. The dilemma around quicksort is that it *does* require a certain amount of extra space: the space for each recursive call to allocate left and right pointer references as it continues to partition and sort the array down to single-element lists.

Here’s the rub, however: even though quicksort needs more than the traditional constant, or *O(1)*, amount of space, it doesn’t need a whole lot more space than that. It requires an extra *O(log n)* amount of space for its recursion stack, which is still pretty small, but technically it’s more than *O(1)* constant space. Some people seem to think that this makes it an out-of-place algorithm, but most of the texts and course material that I’ve read imply that algorithms which require only a minimal, logarithmic amount of space still count as in-place. So we’ll say that quicksort is an **in-place** sorting algorithm.
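
We can get a feel for that recursion-stack space with a small, hypothetical depth counter (again using a simple last-element pivot scheme, not the implementation above). It sorts the list while tracking the deepest level of recursion it reaches, which is exactly the “extra space” in question. Notice how a sorted input drives the depth up to roughly *n* levels:

```
// A hypothetical depth counter: quicksort's extra space is its
// recursion stack, so the deepest call approximates that space.
function maxDepth(items, lo, hi, depth) {
  if (lo >= hi) return depth;
  var pivot = items[hi];
  var i = lo;
  for (var j = lo; j < hi; j++) {
    if (items[j] < pivot) {
      var tmp = items[i]; items[i] = items[j]; items[j] = tmp;
      i++;
    }
  }
  var t = items[i]; items[i] = items[hi]; items[hi] = t;
  // Recurse on both partitions, reporting the deeper of the two.
  return Math.max(
    maxDepth(items, lo, i - 1, depth + 1),
    maxDepth(items, i + 1, hi, depth + 1)
  );
}

// Worst case: a sorted 64-element list produces one lopsided
// partition per level, so the stack grows to about n levels deep.
var sortedList = [];
for (var k = 1; k <= 64; k++) sortedList.push(k);
console.log(maxDepth(sortedList, 0, 63, 0)); // 63
```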

Quicksort has one major difference from merge sort, and it ends up being a defining point between these otherwise super-similar algorithms. This is the fact that quicksort is an **unstable** algorithm, meaning that it isn’t guaranteed to preserve the order of elements as it reorders; two elements that are exactly the same in the unsorted array could show up in reversed order in the sorted one! Stability ends up being what causes people to choose merge sort over quicksort, and it’s the one area where merge sort appears as the obvious winner.

Similarly, because quicksort can sort items without creating a new or duplicated list, it doesn’t require external memory; instead, it can store its entire dataset in main memory, making it an **internal** sorting algorithm. We already know that quicksort uses recursion to call itself from within itself, which means that we can classify it as a **recursive** algorithm as well. And we also know that it uses comparison operators like < and >, so we can be sure that it is a **comparison** algorithm.

These characteristics are the key to understanding why so many people swore by quicksort long ago and haven’t looked back since. Based upon this two-part series, we already have the knowledge to determine when quicksort is the right tool for the job!

Quicksort is usually a great choice if we find ourselves meeting these conditions:

- We don’t care about maintaining the order of our items (stability isn’t important to us).
- We need to use an algorithm that can fit all of our data to be sorted into memory, without using any external memory.
- We are sure that we’ll never have to deal with data that’s sorted, or even mostly sorted.

As it turns out, programmers love quicksort so much that they’ve even figured out ways to optimize this algorithm *even further*!

We can optimize quicksort by running the recursive calls in parallel.

We have to call quicksort on both the left and right partitions of a dataset anyway. The cool thing is that, because neither of these two partitions has to be compared to the other after the initial division of elements, we can run the recursive calls on both partitions in parallel.

When we run two quicksort calls in parallel, we are effectively spawning two different tasks in order to run them at the same time. We sort each partition of the array independently, using the same amount of space but, ideally, about half the time we did before. This is often referred to as **parallel quicksort**.

Another great optimization is being a little smarter about which pivot to choose. Rather than just choosing the first or the last element, which is a lot of what we’ve been doing, we could be a little bit more clever with our algorithm.

We could look at a few elements from the list (say, the first, middle, and last) and select the median element from amongst them. This gives us a better idea of what types of items are in the list, and then we can do a little bit of math to select the best item as the pivot.
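
That sampling trick is commonly known as *median-of-three*. A hypothetical helper that samples the first, middle, and last elements and returns the index of their median might look like this:

```
// A hypothetical median-of-three pivot chooser: sample the first,
// middle, and last elements and pick the one whose value sits
// between the other two, instead of blindly taking an endpoint.
function medianOfThreeIndex(items, left, right) {
  var mid = Math.floor((left + right) / 2);
  var a = items[left], b = items[mid], c = items[right];
  // A value is the median when it's >= one sample and <= the other,
  // which is exactly when the two differences have opposite signs.
  if ((a - b) * (a - c) <= 0) return left;
  if ((b - a) * (b - c) <= 0) return mid;
  return right;
}

// On the nearly-sorted list from earlier, this avoids the bad
// last-element pivot (16) and picks a value near the middle.
var nearlySorted = [1, 2, 3, 4, 5, 6, 7, 16];
console.log(nearlySorted[medianOfThreeIndex(nearlySorted, 0, 7)]); // 4
```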

There are many times that even an *optimized* quicksort doesn’t fulfill all of our conditions. So how do programmers deal with this case? Why, they use quicksort conditionally, of course!

Node, for example, uses quicksort under the hood of its Array.sort function; however, for shorter arrays (in node’s case, those with a length less than or equal to 10), it uses insertion sort! You can see the logic they use to determine that deep within v8’s source code.

Quicksort is so widely-loved that, if we start keeping an eye out for it, we’ll start to see it *all around us*! For example, Ruby’s interpreter uses quicksort behind the scenes; if we dig around, we’ll find a method called ruby_qsort that pulls back the curtain behind this abstraction. Many modern-day browsers use an implementation of quicksort in their various versions of sort methods! In fact, the browser that you’re using right now probably has an internal definition of quicksort hidden deep within its sort code. Of course, it’s just up to you to find it.

So, go forth: you know exactly what to look for now!

### Resources

Quicksort can be written in many different ways: imperatively, functionally, and even iteratively (not recursively!). Each implementation has its benefits and drawbacks, and it’s important to understand the different design decisions being made in the process of writing a quicksort algorithm. If you’d like to try writing your own, or want to see different implementations and what makes each of them efficient, check out the resources below!

- Efficiency of Quick Sort, Udacity
- Sorting algorithms/Quicksort, Rosetta Code
- Computer science in JavaScript: Quicksort, Nicholas C. Zakas
- Parallel Quicksort Algorithms: University of Oslo
- Is quicksort in-place or not, StackOverflow

*This post was originally published on medium.com*