Modern algorithms can be complex. Big O Notation helps developers measure performance by analyzing how runtime grows with input size. Understanding O(1), O(n), and O(n²) is essential for writing efficient code.
Big O is an upper bound on growth, and in practice it usually describes the worst-case scenario: how an algorithm's running time scales as n (the input size) increases.
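To see why the worst case is the one that matters, here is a minimal sketch (the `contains` helper is illustrative, not from the original post): a linear search can get lucky and finish in one step, but Big O reports the n comparisons it needs when the target is missing.

```go
package main

import "fmt"

// contains scans the slice for target.
// Best case: target is at index 0 (one comparison).
// Worst case: target is absent, forcing n comparisons.
// Big O reports the worst case, so contains is O(n).
func contains(arr []int, target int) bool {
	for _, v := range arr {
		if v == target {
			return true
		}
	}
	return false
}

func main() {
	arr := []int{3, 7, 1, 9}
	fmt.Println(contains(arr, 3)) // best case: found at index 0
	fmt.Println(contains(arr, 5)) // worst case: scanned all n elements
}
```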
O(1): Constant Time
Accessing an array element by index takes one operation, no matter how large the array is.
```go
package main

import "fmt"

// getFirst returns the first element: a single operation, O(1).
// Note: it panics if arr is empty.
func getFirst(arr []int) int {
	return arr[0]
}

func main() {
	arr := []int{10, 20, 30, 40}
	fmt.Println(getFirst(arr)) // Output: 10
}
```
O(n): Linear Time
Scanning an entire array: the number of operations grows in direct proportion to n.
```go
// findMax visits every element once, so it runs in O(n) time.
// Note: it panics if arr is empty.
func findMax(arr []int) int {
	maxVal := arr[0]
	for i := 1; i < len(arr); i++ { // n-1 iterations
		if arr[i] > maxVal {
			maxVal = arr[i]
		}
	}
	return maxVal
}
```
O(n²): Quadratic Time
Nested loops over the same input. Costly for large n: at n = 1,000, on the order of a million operations.
```go
// bubbleSort repeatedly swaps adjacent out-of-order elements.
// The nested loops perform roughly n²/2 comparisons, so it is O(n²).
func bubbleSort(arr []int) []int {
	n := len(arr)
	for i := 0; i < n-1; i++ {
		for j := 0; j < n-i-1; j++ {
			if arr[j] > arr[j+1] {
				arr[j], arr[j+1] = arr[j+1], arr[j]
			}
		}
	}
	return arr
}
```
Why It Matters
An O(n²) algorithm feels fine at n = 100 but grinds to a halt at n = 10,000. Choosing the right complexity class is the difference between a feature that scales and one that times out under load.
- O(1): instant, regardless of input
- O(n): predictable, scales linearly
- O(n²): fine for small inputs, deadly at scale
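You can make the gap concrete by counting operations instead of timing them. This sketch (the `bubbleSortOps` counter is an illustrative helper, not part of the original post) tallies how many inner-loop comparisons the bubble sort above performs as n grows, next to the linear cost of a single scan:

```go
package main

import "fmt"

// bubbleSortOps counts the inner-loop comparisons bubble sort makes
// for an input of size n: roughly n²/2, i.e. quadratic growth.
func bubbleSortOps(n int) int {
	ops := 0
	for i := 0; i < n-1; i++ {
		for j := 0; j < n-i-1; j++ {
			ops++
		}
	}
	return ops
}

func main() {
	for _, n := range []int{100, 1000, 10000} {
		// A linear scan costs n operations; bubble sort costs ~n²/2.
		fmt.Printf("n=%5d  linear=%5d  quadratic=%8d\n", n, n, bubbleSortOps(n))
	}
}
```

Multiplying n by 100 multiplies the linear column by 100 but the quadratic column by roughly 10,000, which is exactly why O(n²) code that passed testing at small sizes falls over in production.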
Conclusion
Time complexity is the language engineers use to reason about performance before writing a single benchmark. Master O(1), O(n), and O(n²) first. They show up in nearly every algorithm you'll touch.
Next up: O(log n) and O(n log n), the complexities that power search and sort.