Searching and sorting power modern applications. For comparison-based algorithms, O(log n) search and O(n log n) sorting are the best bounds achievable, so knowing them is the difference between a search that returns instantly and one that crawls.
O(log n): Binary Search
Halves the search space each step, so the running time grows logarithmically: among 1 million sorted items, a target is found in ~20 comparisons, since 2²⁰ ≈ 1,000,000.
func binarySearch(arr []int, target int) int {
    left, right := 0, len(arr)-1
    for left <= right { // runs at most ~log n iterations
        mid := left + (right-left)/2 // avoids overflow vs. (left+right)/2
        if arr[mid] == target {
            return mid
        }
        if arr[mid] < target {
            left = mid + 1 // target can only be in the right half
        } else {
            right = mid - 1 // target can only be in the left half
        }
    }
    return -1 // not found
}
The trick: each iteration eliminates half of the remaining candidates. The one prerequisite: the input must be sorted.
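A quick usage sketch (the main function and sample values are illustrative, and it assumes the binarySearch function above is in the same package). The standard library offers the same halving via sort.SearchInts, which returns the insertion point rather than -1, so you verify the element yourself:

package main

import (
    "fmt"
    "sort"
)

func main() {
    // One million sorted ints: 0, 2, 4, ...
    arr := make([]int, 1_000_000)
    for i := range arr {
        arr[i] = i * 2
    }

    fmt.Println(binarySearch(arr, 777_754)) // 388877
    fmt.Println(binarySearch(arr, 7))       // -1 (odd numbers are absent)

    // Standard-library equivalent: sort.SearchInts returns where the
    // value would be inserted, so confirm it is actually present.
    if i := sort.SearchInts(arr, 777_754); i < len(arr) && arr[i] == 777_754 {
        fmt.Println("found at index", i)
    }
}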
O(n log n): Merge Sort
Divide-and-conquer: split the array, sort each half recursively, merge the sorted halves. The array is halved ~log n times and each level of merging does O(n) work, which is where n log n comes from. Its guaranteed bound and stability make it an industry standard for general-purpose sorting.
func mergeSort(arr []int) []int {
    if len(arr) <= 1 {
        return arr // base case: 0 or 1 element is already sorted
    }
    mid := len(arr) / 2
    left := mergeSort(arr[:mid])  // sort left half
    right := mergeSort(arr[mid:]) // sort right half
    return merge(left, right)     // combine
}

func merge(left, right []int) []int {
    result := make([]int, 0, len(left)+len(right))
    i, j := 0, 0
    for i < len(left) && j < len(right) {
        if left[i] <= right[j] { // <= keeps equal elements in order (stable)
            result = append(result, left[i])
            i++
        } else {
            result = append(result, right[j])
            j++
        }
    }
    // At most one side still has elements; append the leftovers.
    return append(append(result, left[i:]...), right[j:]...)
}
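A minimal usage sketch (the sample input is illustrative; it assumes the two functions above are in the same package). Note that this version allocates a fresh slice on every merge: O(n) extra memory is the classic trade-off merge sort makes for its guaranteed bound and stability.

package main

import "fmt"

func main() {
    fmt.Println(mergeSort([]int{5, 2, 9, 1, 7, 3})) // [1 2 3 5 7 9]
}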
Real-World Usage
Go's sort.Slice() uses a similar O(n log n) hybrid (pattern-defeating quicksort since Go 1.19; earlier versions used an introsort-style mix of quicksort, heapsort, and insertion sort). This is the sweet spot: fast enough for production, simple enough to reason about.
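For reference, a minimal sort.Slice call (the struct and sample data are illustrative):

package main

import (
    "fmt"
    "sort"
)

type user struct {
    name string
    age  int
}

func main() {
    users := []user{{"Ada", 36}, {"Linus", 28}, {"Grace", 45}}

    // sort.Slice takes the slice plus a less function; the hybrid
    // O(n log n) algorithm runs under the hood.
    sort.Slice(users, func(i, j int) bool {
        return users[i].age < users[j].age
    })

    fmt.Println(users) // [{Linus 28} {Ada 36} {Grace 45}]
}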
- Binary search trees, B-trees: O(log n) lookups
- Merge sort, heapsort, quicksort (avg): O(n log n)
- Database indexes: built around these complexities (see the sketch after this list)
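To make the index analogy concrete, the standard library's sort.Search runs a binary search over any monotonic predicate. Here is a sketch of the "seek to the first matching key" operation an index performs (the IDs are illustrative):

package main

import (
    "fmt"
    "sort"
)

func main() {
    // Find the first ID >= 42 in a sorted slice, the same "seek"
    // a B-tree index performs in O(log n).
    ids := []int{3, 17, 29, 42, 58, 90}
    i := sort.Search(len(ids), func(k int) bool { return ids[k] >= 42 })
    fmt.Println(i, ids[i]) // 3 42
}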
Conclusion
When you see O(log n), think "halving." When you see O(n log n), think "divide, conquer, combine." These two complexities are the workhorses of efficient data access. Most production-grade algorithms live here.
Next up: the complexities to avoid, O(2ⁿ) and O(n!).