Introduction
Have you ever wondered why algorithms with the same time complexity behave so differently in practice? I noticed exactly that while learning sorting algorithms.
Instead of only relying on theoretical analysis, I decided to build a small benchmark in C to compare the runtime performance of six sorting algorithms.
This project helped me better understand not only time complexity, but also how real-world performance is affected by implementation details.
Project Repository
What I built
This project implements six classic sorting algorithms:
- Bubble Sort
- Selection Sort
- Insertion Sort
- Quick Sort
- Counting Sort
- Radix Sort
Radix Sort is implemented using counting sort as a subroutine.
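To illustrate that idea, here is a minimal sketch of radix sort built on a stable counting-sort pass per decimal digit. This is my own simplified reconstruction (assuming non-negative integers), not necessarily the exact code in the repository:

```c
#include <stdlib.h>
#include <string.h>

/* One stable counting-sort pass over a single decimal digit
   (exp = 1, 10, 100, ...). Radix sort repeats this from the least
   significant digit upward. */
static void counting_sort_by_digit(int *arr, int n, int exp) {
    int *out = malloc(n * sizeof(int));
    int count[10] = {0};
    for (int i = 0; i < n; i++)
        count[(arr[i] / exp) % 10]++;
    for (int d = 1; d < 10; d++)       /* prefix sums -> final positions */
        count[d] += count[d - 1];
    for (int i = n - 1; i >= 0; i--)   /* scanning backwards keeps the pass stable */
        out[--count[(arr[i] / exp) % 10]] = arr[i];
    memcpy(arr, out, n * sizeof(int));
    free(out);
}

void radix_sort(int *arr, int n) {
    int max = arr[0];                  /* assumes n >= 1 and no negatives */
    for (int i = 1; i < n; i++)
        if (arr[i] > max) max = arr[i];
    for (int exp = 1; max / exp > 0; exp *= 10)
        counting_sort_by_digit(arr, n, exp);
}
```

Because each digit pass is stable, ties from earlier digits are preserved, which is what makes the overall radix sort correct.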
Based on my understanding of these algorithms, I created a comparison table below to highlight their fundamental differences.
Comparison Table
| Feature | Bubble | Selection | Insertion | Quick | Counting | Radix |
|---|---|---|---|---|---|---|
| Best Time | O(n) | O(n²) | O(n) | O(n log n) | O(n + k) | O(d × (n + k)) |
| Average Time | O(n²) | O(n²) | O(n²) | O(n log n) | O(n + k) | O(d × (n + k)) |
| Worst Time | O(n²) | O(n²) | O(n²) | O(n²) | O(n + k) | O(d × (n + k)) |
| Space Complexity | O(1) | O(1) | O(1) | O(log n) | O(k) | O(n + k) |
| Stability | Yes | No | Yes | No | Yes | Yes |
| Sorting Type | Comparison | Comparison | Comparison | Comparison | Non-Comparison | Non-Comparison |
| Practical Speed | Slowest | Slow | Slow | Fast | Fastest | Fastest |
| Use Case | Very Small Data | Minimal Memory | Small Data | General Purpose | Integers (Small Range) | Multi-digit Integers / Strings |
In version 1 of this project, I focused on comparing the runtime performance of different sorting algorithms.
Time complexity is one of the most important factors when analyzing algorithms. A simple way to understand it is to think of it as the length of a queue.
The longer the queue, the longer you have to wait.
Different time complexities lead to significant differences in runtime:
- O(n²) → grows very quickly as input size increases (slow)
- O(n log n) → efficient and commonly used in practice
- O(n) → optimal, but usually requires specific conditions
For example, bubble sort performs the worst among the tested algorithms because it involves a large number of repeated operations, especially frequent swaps, which significantly reduces efficiency.
Space complexity is also an important factor, as it determines whether an algorithm is feasible under large data sizes or memory constraints.
In this version of the project, I mainly focused on runtime performance.
In future versions, I plan to further analyze how space usage affects real-world performance.
Another important property of sorting algorithms is stability:
If we think of time complexity as the length of a queue, then stability can be understood as whether people are allowed to cut in line.
If no one cuts in line, the order is preserved.
If people cut in line, the original order is disrupted.
Formally, stability means:
For elements with the same key, their relative order remains unchanged after sorting.
This property becomes especially important when dealing with data that contains additional information beyond the keys.
Sorting algorithms can also be divided into two categories:
- Comparison-based sorting
- Non-comparison-based sorting
Comparison-based sorting algorithms rely on comparisons between elements, so they usually involve more operations and tend to be slower in practice.
They have a theoretical lower bound of Ω(n log n) comparisons, but can be applied to any type of data.
Non-comparison-based sorting algorithms do not rely on comparisons, so they can achieve better time complexity in some cases (such as O(n)), but they require specific conditions, such as limited value ranges or specific data types.
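As a concrete example of the non-comparison approach, here is a minimal counting sort sketch. It assumes non-negative integers all below a known bound `k`; this is a simplified illustration, not necessarily the repository's exact code:

```c
#include <stdlib.h>

/* Sorts n non-negative ints, all < k, without comparing elements:
   tally how often each value occurs, then rewrite the array in order. */
void counting_sort(int *arr, int n, int k) {
    int *count = calloc(k, sizeof(int));
    for (int i = 0; i < n; i++)
        count[arr[i]]++;            /* O(n): count each value */
    int pos = 0;
    for (int v = 0; v < k; v++)     /* O(k): emit values in ascending order */
        while (count[v]-- > 0)
            arr[pos++] = v;
    free(count);
}
```

Notice that no element is ever compared to another; the value range `k` does the work, which is exactly why the O(n + k) bound only pays off when `k` is small relative to `n`.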
Key Design Decisions
The goal of this experiment is not just to implement six sorting algorithms, but to compare their runtime performance under fair conditions.
If the same array is sorted repeatedly, later algorithms would receive already sorted data, which would invalidate the comparison.
To avoid this, I generate a random dataset once and then use memcpy to create six identical copies.
Each sorting algorithm operates on an identical copy of the same input, ensuring a fair comparison.
1. Unified Function Interface
All sorting functions follow the same interface:
void sort(int *arr, int n)
This allows me to use function pointers to call different algorithms through a single benchmarking function.
As a result:
- The code is cleaner
- Repetition is reduced
- New algorithms can be easily added
The time_sort() function is used to measure and print the runtime for each algorithm.
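The dispatch can be sketched like this. This is my own reconstruction of the idea, so the real `time_sort()` in the repository may differ in details:

```c
#include <stdio.h>
#include <time.h>

/* Any function matching void sort(int *arr, int n) can be benchmarked. */
typedef void (*sort_fn)(int *arr, int n);

/* Runs one algorithm on its copy of the data and prints the elapsed
   CPU time measured with clock(). */
void time_sort(const char *name, sort_fn sort, int *arr, int n) {
    clock_t start = clock();
    sort(arr, n);
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("%-15s %.4f s\n", name, secs);
}
```

Adding a new algorithm then only requires one extra call, e.g. `time_sort("Quick Sort", quick_sort, copy, n);`.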
2. Fair Comparison with memcpy
To ensure consistency, I use a "generate once, copy multiple times" strategy.
Instead of generating separate datasets for each algorithm, I duplicate the same array using memcpy.
This eliminates input variation and makes the comparison more reliable.
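The "generate once, copy multiple times" strategy can be sketched as follows (the array size and value range here are placeholder assumptions, not the repository's actual settings):

```c
#include <stdlib.h>
#include <string.h>

#define N 10000  /* assumed input size; the real benchmark may use another value */

/* Fill one master array with random values exactly once. */
void fill_random(int *arr, int n) {
    for (int i = 0; i < n; i++)
        arr[i] = rand() % 10000;  /* assumed value range */
}

/* Hand each algorithm a byte-identical copy of the master array,
   so every algorithm sorts the same unsorted input. */
int *make_copy(const int *master, int n) {
    int *copy = malloc(n * sizeof(int));
    memcpy(copy, master, n * sizeof(int));
    return copy;
}
```

Sorting a fresh copy each time avoids the trap described above, where later algorithms would otherwise receive already-sorted data.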
3. ASCII Visualization
In addition to printing raw runtime values, I implemented a simple ASCII bar chart to visualize performance differences.
This makes the results more intuitive and easier to understand.
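A bar chart like this boils down to scaling each runtime against the slowest one. Here is a small sketch of that idea (a simplified stand-in for the repository's actual output code):

```c
#include <stdio.h>

/* Number of '#' marks for one bar: runtime scaled against the slowest
   algorithm, so the longest bar always spans the full width. */
int bar_length(double secs, double max_secs, int width) {
    int len = (int)(secs / max_secs * width);
    return len < 1 ? 1 : len;   /* always show at least one mark */
}

void print_bar(const char *name, double secs, double max_secs) {
    printf("%-15s |", name);
    for (int i = 0; i < bar_length(secs, max_secs, 40); i++)
        putchar('#');
    printf(" %.4f s\n", secs);
}
```

Linear scaling like this makes O(n²) bars visually dwarf the O(n log n) ones, which is the point of the chart.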
4. Algorithm Selection
Instead of only comparing comparison-based sorting algorithms, I also included non-comparison-based algorithms (Counting sort and radix sort).
This allows the experiment to highlight differences not only in performance, but also in algorithm design and complexity models.
5. Limitations
It is important to note that this experiment uses clock() to measure runtime.
The results may be affected by:
- Compiler optimizations
- Hardware performance
- Input size and data distribution
Therefore, the results should be interpreted as trends rather than absolute benchmarks.
Actual performance should be analyzed together with time complexity, constant factors, cache behavior, and data characteristics.
Results
Below is a sample output from the benchmark:
From the experimental results, we can observe:
- O(n²) algorithms (bubble, selection, insertion) are significantly slower than the others
- Quick sort performs best in general cases
- Counting sort and radix sort achieve the fastest performance under the current data range
One important observation is that:
Even algorithms with the same time complexity (O(n²)) can show significant performance differences.
This indicates that time complexity alone does not fully determine real-world performance.
Other factors also play an important role, such as:
- Operation type (swapping vs shifting)
- Cache efficiency
- Implementation details
What I Learned
The biggest takeaway from this project is not simply identifying which sorting algorithm is faster, but gaining a deeper understanding of algorithm performance and experimental design.
First, I realized that time complexity only describes growth trends.
Actual performance is also influenced by factors such as operation types, data distribution, and constant factors.
For example, even algorithms with the same time complexity (O(n²)) can show significant differences in runtime.
Second, this project helped me understand the importance of fair comparison. By using the same dataset and duplicating it for each algorithm, I was able to control variables and produce more reliable results.
Finally, I learned that different sorting algorithms serve different purposes.
There is no single “best” algorithm.
The choice of algorithm depends on factors such as data size, data characteristics, and space constraints.
In summary, this project is not only about implementing sorting algorithms, but also about learning how to evaluate and compare them properly.
Next Steps
For version 2 of this project, I plan to test stability.
The idea is to attach original indices to elements with the same value,
and then verify whether each sorting algorithm preserves their relative order.
This will help further analyze the behavior of stable vs unstable sorting algorithms in practice.
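The check itself can be sketched now, ahead of version 2. Assuming each element is tagged with its original index, stability means equal keys must keep strictly increasing original indices after sorting (this is my own sketch of the planned test, not existing repository code):

```c
#include <stdbool.h>

typedef struct {
    int key;    /* sort key (may repeat across elements) */
    int index;  /* position in the original, unsorted array */
} item;

/* After sorting by key, a stable sort must keep equal keys in their
   original order: within any run of equal keys, indices must increase. */
bool is_stable(const item *sorted, int n) {
    for (int i = 1; i < n; i++)
        if (sorted[i].key == sorted[i - 1].key &&
            sorted[i].index < sorted[i - 1].index)
            return false;
    return true;
}
```

Running each algorithm on tagged data and feeding the result to a check like this would confirm (or refute) the stability column in the comparison table above.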
GitHub
Source code:
