Jyoti Prakash Sethy
Sorting the Data: A Whirlwind Tour of Algorithms πŸ”πŸ’»πŸ”ΌπŸ”½πŸ•°οΈ

Once upon a time, there was a young programmer named Kunal πŸ’» who was tasked with organizing a large amount of data for a client πŸ’Ό. He had heard that sorting was an essential aspect in many computer science applications πŸ’»πŸ”, but he wasn't sure where to start πŸ€”. He reached out to his mentor, Priyansh πŸ§‘β€πŸ’Ό, who was a seasoned programmer and an expert in sorting algorithms 🧠.

Priyansh πŸ§‘β€πŸ’Ό took Kunal πŸ’» under his wing and began to teach him about the different sorting techniques available πŸ“š. He explained that sorting algorithms arrange a set of elements in a particular order, either ascending πŸ”Ό or descending πŸ”½, based on certain criteria πŸ“Š. There are various sorting algorithms available, each with its own advantages and disadvantages in terms of time πŸ•°οΈ and space complexity πŸš€, stability 🀞, and ease of implementation πŸ’».

Together, Priyansh πŸ§‘β€πŸ’Ό and Kunal πŸ’» went through each sorting algorithm, including Bubble sort 🧼, Insertion sort πŸ’Ό, Selection sort πŸ”, Quick sort πŸ’¨, Merge sort πŸ”„, Heap sort 🧊, Shell sort 🐚, Counting sort πŸ”’, Radix sort πŸ’‘, and Bucket sort 🧺. They discussed the time and space complexity of each algorithm and when it was appropriate to use each one πŸ“Š. They also provided examples and code samples in C++ πŸ’» to help illustrate the concepts πŸ’‘.

By the end of their lessons, Kunal πŸ’» had a solid understanding of the different sorting techniques and was confident in his ability to implement them in his work πŸ’ͺ. He was grateful to Priyansh πŸ§‘β€πŸ’Ό for his guidance and expertise πŸ™, and he went on to use his newfound knowledge to solve many complex data-related problems for his clients πŸ’Ό.

Just like Kunal πŸ’», this blog post is here to help you understand the different sorting techniques and their implementation πŸ’‘. Whether you are a beginner πŸ”œ or an experienced programmer πŸ’»πŸ’Ό, join us as we delve into the world of sorting algorithms πŸ”.

Bubble Sort: The OG of Sorting Algorithms πŸ§ΌπŸ”

Welcome back, data enthusiasts! We're diving into the first sorting algorithm on our list: Bubble sort πŸ’‘. As the name suggests, this algorithm is like a group of bubbles rising to the top 🧼. It's a simple and straightforward technique that's great for small data sets, but not so much for large ones πŸ’ΌπŸ’».

Bubble sort works by repeatedly stepping through the list and swapping adjacent elements that are in the wrong order πŸ”€. Each pass pushes the largest remaining element to the end, and the passes continue until the list is sorted in the desired order πŸ”’. The idea is simple, but the number of comparisons grows quadratically, making it inefficient for larger data sets πŸ•°οΈ.

Time Complexity: O(n^2) in the average and worst case πŸ•°οΈ

Space Complexity: O(1) πŸ’Ύ

Bubble sort is best used for educational purposes, as it's a great way to understand the basic principles of sorting algorithms πŸ’‘. It also has one niche use case: with an early-exit check that stops once a pass makes no swaps, it runs in close to linear time on data that is already partially or nearly sorted (see the sketch at the end of this section) πŸ’Ό.

Now, let's see how bubble sort works with a simple example:

#include <iostream>
using namespace std;

// Repeatedly swap adjacent out-of-order elements until the array is sorted
void bubbleSort(int arr[], int n)
{
    int i, j;
    for (i = 0; i < n - 1; i++)           // one pass per element
    {
        for (j = 0; j < n - i - 1; j++)   // last i elements are already in place
        {
            if (arr[j] > arr[j + 1])      // swap if the pair is out of order
            {
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
        }
    }
}

int main()
{
    int arr[] = {64, 34, 25, 12, 22, 11, 90};
    int n = sizeof(arr) / sizeof(arr[0]);
    bubbleSort(arr, n);
    cout << "Sorted array is: \n";
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
    return 0;
}


Let's Get Bubbly with Bubble Sort πŸ§ΌπŸ’ƒ

We've got a bubbly bunch of code here, folks! 🧼 This is the Bubble Sort algorithm in all its glory. And just like a group of bubbles, the goal of this algorithm is to sort the elements in an array so they float up to the top in the desired order πŸ”’.

The code starts by defining a function bubbleSort that takes in an integer array arr and its size n. Then, we have two for loops that do the heavy lifting. The outer loop runs n-1 times, and on each pass the inner loop runs from 0 up to (but not including) n-i-1, because the last i elements are already in their final positions.

The inner loop compares each element in the unsorted part of the array with the one right after it. If the current element is greater than the next element, the algorithm swaps them using a temporary variable temp πŸ”€. Repeating these passes leaves the array sorted in ascending order πŸ”Ό.

Finally, we have the main function which creates an integer array arr with 7 elements and calculates its size n. The bubbleSort function is then called, passing in arr and n as parameters. The sorted array is displayed using a for loop and cout statement πŸ’».

Now, let's imagine you have a bunch of random numbers and want to sort them in ascending order. You could use bubble sort and watch as the larger numbers rise to the top just like bubbles in a glass of champagne 🍾.

And there you have it, a simple implementation of Bubble sort πŸ’‘. So next time you're working with small data sets, remember to call upon the OG of sorting algorithms 🧼.
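
Before we move on, here's a minimal sketch of the early-exit optimization mentioned earlier, assuming the same setup as the example above (the function name bubbleSortOptimized is just ours for illustration). A swapped flag lets the algorithm stop as soon as a full pass makes no swaps, which is what makes bubble sort reasonable on nearly sorted data:

#include <iostream>
using namespace std;

// Bubble sort with an early-exit check: if a whole pass makes no swaps,
// the array is already sorted and we can stop. On nearly sorted input
// this brings the running time close to O(n).
void bubbleSortOptimized(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        bool swapped = false;              // did this pass change anything?
        for (int j = 0; j < n - i - 1; j++)
        {
            if (arr[j] > arr[j + 1])
            {
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                swapped = true;
            }
        }
        if (!swapped)                      // no swaps means we're done early
            break;
    }
}

int main()
{
    int arr[] = {11, 12, 22, 25, 34, 90, 64};  // almost sorted already
    int n = sizeof(arr) / sizeof(arr[0]);
    bubbleSortOptimized(arr, n);
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
    return 0;
}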

Selection Sort: Choosing the Right One πŸ”πŸ†

Alright, data enthusiasts! We're moving on to the next sorting algorithm on our list: Selection sort πŸ’‘. As the name suggests, this algorithm involves selecting the right element to be in its correct position πŸ”. It works by dividing the list into two parts: a sorted part and an unsorted part πŸ’Ό. The algorithm then selects the minimum element from the unsorted part and adds it to the sorted part, repeating the process until the entire list is sorted πŸ”’.

Time Complexity: O(n^2) πŸ•°οΈ

Space Complexity: O(1) πŸ’Ύ

Selection sort is a great choice when memory is limited, since it sorts in place and needs no extra storage πŸ’ΌπŸ’». It's also a reasonable pick when writes are expensive, because it performs at most n-1 swaps, and it works fine for small data sets where raw speed isn't critical πŸ’‘.

Let's see how selection sort works with a simple example:

#include <iostream>
using namespace std;

// Repeatedly select the smallest element from the unsorted part
// and move it to the end of the sorted part.
void selectionSort(int arr[], int n) {
   int i, j, minIndex;
   for (i = 0; i < n-1; i++) {
      minIndex = i;                    // assume the first unsorted element is the minimum
      for (j = i+1; j < n; j++) {
         if (arr[j] < arr[minIndex])   // found a smaller element
            minIndex = j;
      }
      int temp = arr[minIndex];        // swap the minimum into position i
      arr[minIndex] = arr[i];
      arr[i] = temp;
   }
}

int main() {
   int arr[] = {64, 25, 12, 22, 11};
   int n = sizeof(arr)/sizeof(arr[0]);
   selectionSort(arr, n);
   cout << "Sorted array is: \n";
   for (int i=0; i < n; i++)
      cout << arr[i] << " ";
   return 0;
}


Let's dive into the code sample of Selection sort πŸ’»πŸ’‘.

The first thing we do is include the iostream library and add using namespace std; so we can use cout later on without the std:: prefix πŸ”Œ.

Next up, we have the selectionSort function which takes in two arguments: an integer array arr and an integer n which represents the size of the array πŸ“Š.

In the function, we start by initializing three variables: i, j, and minIndex πŸ’». i will be used to iterate through the entire list and j will be used to compare elements to find the minimum element in the unsorted part of the list πŸ”. minIndex keeps track of the index of the minimum element found in each iteration of the inner loop.

The outer loop starts at i = 0 and runs while i is less than n-1; each iteration places one more element into its final position πŸ”’. At the start of each iteration, minIndex is set to i, which is where the inner loop begins its search πŸ’».

The inner loop starts at j = i + 1 and runs while j is less than n. In each iteration, it compares the current element arr[j] with the current minimum element arr[minIndex] πŸ”. If the current element is smaller than the current minimum, minIndex is updated to the index of the current element πŸ’».

Once the inner loop finishes, we use a simple swap operation to put the minimum element in its correct position by swapping it with the current element at arr[i] πŸ”„.

Finally, once the outer loop has processed positions 0 through n-2, the list is fully sorted; the last element is automatically in its correct place πŸ”’.

In the main function, we declare an array arr with the values {64, 25, 12, 22, 11} and find the size of the array with n = sizeof(arr)/sizeof(arr[0]). We then call the selectionSort function and pass in arr and n as arguments πŸ’».

After the sorting is done, we use a for loop to print out the sorted array πŸ”’. And just like that, we have a fully sorted list thanks to Selection sort! πŸŽ‰

So there you have it, a simple but powerful explanation of the code sample for Selection sort πŸ’»πŸ’‘. Who knew sorting could be this much fun? πŸ˜ƒ So the next time you need to choose the right element for its correct position, remember Selection sort πŸ”.
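
As a side note, if you're happy to lean on the standard library, the same idea can be written more compactly with std::min_element and std::swap. This is just a sketch showing the equivalence (the name selectionSortStd is ours for illustration), not a different algorithm:

#include <algorithm>   // min_element, swap
#include <iostream>
using namespace std;

// Same selection sort idea: find the smallest element in the unsorted
// suffix arr[i..n-1] and swap it into position i.
void selectionSortStd(int arr[], int n) {
   for (int i = 0; i < n - 1; i++) {
      int* minPos = min_element(arr + i, arr + n);  // pointer to the minimum
      swap(arr[i], *minPos);
   }
}

int main() {
   int arr[] = {64, 25, 12, 22, 11};
   int n = sizeof(arr)/sizeof(arr[0]);
   selectionSortStd(arr, n);
   for (int i = 0; i < n; i++)
      cout << arr[i] << " ";
   return 0;
}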

Next up on our sorting adventure is Quick sort! πŸ”œ

Quick Sort: Sorting in a Flash πŸ”₯πŸ’¨

Alright, data enthusiasts! We're getting to one of the most popular sorting algorithms: Quick sort! πŸ’₯ It's a fast and efficient algorithm that works by dividing the list into smaller parts and then sorting them independently πŸ’». It's based on the divide and conquer approach, where the list is divided into two parts and then sorted recursively πŸ’‘.

The Quick sort algorithm works by selecting a pivot element from the list. The pivot element is used to divide the list into two parts: the smaller elements to the left of the pivot and the larger elements to the right of the pivot. Then, we sort the two parts independently using the Quick sort algorithm. The process is repeated until all elements in the list are sorted in ascending order πŸ”Ό.

Time Complexity: O(n log n) on average πŸ•°οΈ

Quick sort has an average time complexity of O(n log n) πŸ•°οΈ, which is as good as a comparison-based sorting algorithm can do. The actual running time depends on the pivot element that is selected. If the pivot is always the median or close to it, the list splits evenly and the time stays at O(n log n). But if the pivot is always the smallest or largest element (for example, when the input is already sorted and the last element is always chosen), the time complexity degrades to O(n^2).

Space Complexity: O(log n) on average πŸ’Ύ

The space complexity of Quick sort comes from the recursion stack. Each recursive call uses only constant extra space, but the calls nest to a depth of O(log n) on average, giving O(log n) total. With consistently bad pivots, the recursion depth, and therefore the space, can grow to O(n) in the worst case.

Quick sort is a great choice when you have a large data set and you want to sort it efficiently πŸ’»πŸ”. It's also great when you want to sort the data in-place and minimize the use of memory πŸ’ΌπŸ’».

Let's see how Quick sort works with a simple example:

#include <iostream>
using namespace std;

// Lomuto partition: pick the last element as the pivot and place it in its
// final sorted position, with smaller elements on its left.
int partition(int arr[], int low, int high) {
   int pivot = arr[high];            // pivot is the last element
   int i = (low - 1);                // end of the "<= pivot" region so far

   for (int j = low; j <= high - 1; j++) {
      if (arr[j] <= pivot) {         // move small elements into the left region
         i++;
         int temp = arr[i];
         arr[i] = arr[j];
         arr[j] = temp;
      }
   }
   int temp = arr[i + 1];            // put the pivot just after the smaller elements
   arr[i + 1] = arr[high];
   arr[high] = temp;
   return (i + 1);                   // final index of the pivot
}

// Recursively sort the two sides of the pivot
void quickSort(int arr[], int low, int high) {
   if (low < high) {
      int pi = partition(arr, low, high);   // pivot lands at index pi
      quickSort(arr, low, pi - 1);          // sort elements before the pivot
      quickSort(arr, pi + 1, high);         // sort elements after the pivot
   }
}

int main() {
   int arr[] = {64, 25, 12, 22, 11};
   int n = sizeof(arr)/sizeof(arr[0]);
   quickSort(arr, 0, n-1);
   cout << "Sorted array is: \n";
   for (int i=0; i < n; i++)
      cout << arr[i] << " ";
   return 0;
}


Okay! Let's dive into the details of the code sample.

First, we have the partition function. It takes the array, the start index, and the end index as input. It picks the last element as the pivot and rearranges the array so that elements less than or equal to the pivot end up on its left and larger elements on its right. Finally, it swaps the pivot into its correct position and returns that index.

Next, we have the quickSort function, which is the actual implementation of the Quick sort algorithm. It takes the array and the start and end indices as input and uses recursion to sort the array. The function first checks if the start index is less than the end index, which means there are at least two elements left to sort. If this condition is met, the partition function is called and the pivot index is returned. This pivot index divides the array into two parts: the left part and the right part. The quickSort function is then called recursively on both parts, which eventually sorts the entire array.

The time complexity of Quick sort is O(n log n) on average, and its space complexity is O(log n) on average; in the worst case the running time degrades to O(n^2). Even so, Quick sort is an efficient sorting algorithm in practice and is widely used in many applications.

And that's it! With this, you now know how Quick sort works, its time and space complexities, and its use cases. Try running this code on your own and see how Quick sort works like magic πŸ§™β€β™‚οΈ!
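
One common way to avoid the O(n^2) worst case discussed above is to pick the pivot at random instead of always taking the last element. Here's a minimal sketch of that tweak, meant to be compiled together with the partition function from the example above (the names randomizedPartition and randomizedQuickSort are just ours for illustration):

#include <cstdlib>   // rand

// Swap a randomly chosen element into the last position, then reuse the
// partition function defined earlier. Random pivots make consistently bad
// splits unlikely, keeping the expected running time at O(n log n).
int randomizedPartition(int arr[], int low, int high) {
   int randomIndex = low + rand() % (high - low + 1);
   int temp = arr[randomIndex];           // move the random pivot to the end
   arr[randomIndex] = arr[high];
   arr[high] = temp;
   return partition(arr, low, high);
}

void randomizedQuickSort(int arr[], int low, int high) {
   if (low < high) {
      int pi = randomizedPartition(arr, low, high);
      randomizedQuickSort(arr, low, pi - 1);
      randomizedQuickSort(arr, pi + 1, high);
   }
}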

And there you have it folks! The first part of our sorting algorithm journey πŸš€. We've covered three popular sorting techniques: Bubble sort πŸ’₯, Selection sort πŸ”, and Quick sort πŸ’¨.

Each technique has its own unique strengths and weaknesses, and it's important to understand when to use each one. Whether you're sorting a small list of numbers or a large database, there's a sorting algorithm out there that's right for you πŸ’»πŸ’‘.

So, what's next in our journey? Well, stay tuned for the next part of our sorting algorithm series, where we'll cover even more techniques! πŸ”œ We'll take a closer look at Merge sort πŸ”„, Heap sort 🌳, Shell sort 🐚, Counting sort πŸ’°, Radix sort πŸ”’, and Bucket sort πŸ›’οΈ.

As always, if you have any questions or comments, feel free to reach out πŸ’¬. And remember, happy sorting! πŸŽ‰
