The 400x Speed Gap Nobody Talks About
Linear search gets demolished by binary search on sorted data. Everyone knows that in theory. But here's what surprised me: on a 1 million element array, binary search wasn't just faster — it was 418 times faster in the worst case. And in the average case? The gap widened to over 600x.
Most coding interview prep materials skip the actual performance measurement. They show you the $O(n)$ vs $O(\log n)$ notation and move on. But when you're sitting in an interview and the interviewer asks, "Which algorithm would you choose for this scenario?", the answer depends on details that Big-O notation doesn't capture: dataset size, whether the data is sorted, memory constraints, and how often you're searching.
I ran both algorithms on the same 1 million element dataset with different scenarios: best case, worst case, average case, and repeated searches. The results show exactly when linear search is acceptable (hint: almost never on large datasets) and when you'd be crazy not to use binary search.
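The original benchmark code isn't reproduced here, but a minimal Python sketch of an equivalent harness might look like this. The function names, the `1_000_000` dataset size, and the use of `time.perf_counter` and `bisect` are my assumptions, not the article's actual code:

```python
import bisect
import random
import time

# Hypothetical harness approximating the setup described above;
# not the article's actual benchmark code.
data = list(range(1_000_000))  # 1 million sorted integers

def linear(target):
    # list.index performs a left-to-right linear scan under the hood.
    try:
        return data.index(target)
    except ValueError:
        return -1

def binary(target):
    # bisect_left performs a binary search on the sorted list.
    i = bisect.bisect_left(data, target)
    return i if i < len(data) and data[i] == target else -1

scenarios = {
    "best case (first element)": data[0],
    "worst case (last element)": data[-1],
    "average case (random element)": random.choice(data),
}

for name, target in scenarios.items():
    timings = {}
    for label, fn in (("linear", linear), ("binary", binary)):
        t0 = time.perf_counter()
        fn(target)
        timings[label] = time.perf_counter() - t0
    print(f"{name}: linear={timings['linear']:.6f}s, "
          f"binary={timings['binary']:.6f}s")
```

One practical note: single-shot timings like these are noisy, so for real measurements you'd repeat each search many times (e.g. with `timeit`) and take the minimum.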
How Each Algorithm Works
Linear search is dead simple: start at index 0, compare each element to your target, return when you find it (or reach the end). The search time grows linearly with array size. If you're looking for the last element in a million-item list, you check all million items.
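In code, that's a single loop. Here's a minimal Python sketch (the benchmarked implementation may differ in detail):

```python
def linear_search(arr, target):
    # Start at index 0 and compare each element to the target.
    for i in range(len(arr)):
        if arr[i] == target:
            return i   # found it
    return -1          # reached the end without a match

# Worst case: the target is the last of a million items,
# so every single element gets compared.
print(linear_search(list(range(1_000_000)), 999_999))  # -> 999999
```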