The Making of a 10x Engineer in the AI Era: Why Big-O Thinking Changes Everything

Introduction

Some code works. Some code lasts.

The difference rarely comes down to typing speed, syntax mastery, or how many nights you're willing to push through. It comes down to how you think about a problem before you write a single line.
Exceptional engineers don't start with code. They start with understanding: visualizing the shape of the problem, weighing trade-offs, and asking how a solution will hold up when the data grows, the users multiply, or the pressure mounts.
At the center of that mindset is a tool every serious engineer learns to trust: Big-O notation.

What Big-O Actually Does

Big-O notation is a mathematical framework that describes how an algorithm performs as its input grows. In plain terms, it answers one question:

"How well will this solution scale?"

It strips away the noise (machine specifications, compiler differences, network latency) and gives you a high-level measure of efficiency. Specifically, it describes the worst-case growth rate of an operation relative to the input size n.

This is what lets you compare approaches and choose the one that stays reliable as your data grows from hundreds to millions.
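To make that concrete, here is a minimal sketch (the variable names are illustrative) of the same question — "is this item present?" — answered by two structures with very different growth rates:

```python
# Membership in a list scans elements one by one: O(n) per query.
# Membership in a set hashes the key directly: O(1) on average.
data_list = list(range(1_000_000))
data_set = set(data_list)

target = 999_999
in_list = target in data_list   # may walk all 1,000,000 elements
in_set = target in data_set     # a single hash lookup
print(in_list, in_set)          # True True
```

Both lines return the same answer, but only one of them stays fast when the collection grows by a few orders of magnitude.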

The Big-O Hierarchy: From Best to Worst

Below is the spectrum of common complexity classes, ordered from most to least efficient, with examples.


1. O(1) — Constant Time

The gold standard. Input size is irrelevant.
Whether your dataset has 10 items or 10 million, the operation takes the same time. Build for O(1) wherever you can.

a. Accessing an Array Element by Index

arr = [10, 20, 30, 40]
value = arr[2]
print(value)

b. Inserting into and Accessing a Hash Map (Dictionary)

my_dict = {"firstName":"William","age":54}
my_dict["country"]="Kenya"
#accessing
value = my_dict["firstName"]

c. Stack Operations (Push / Pop)

stack = []
stack.append(10)   # push: O(1)
stack.pop()        # pop: O(1)

d. Queue Operations (Enqueue / Dequeue)

from collections import deque
q = deque()
q.append(1)     # enqueue: O(1)
q.popleft()     # dequeue: O(1)

e. Checking the Length of a List

print(len(arr))

Judgement: O(1) is acceptable


2. O(log n) — Logarithmic Time

Each step dramatically shrinks the remaining problem.

Searching 1,000,000 items takes roughly 20 comparisons. That's not a typo. Logarithmic algorithms are extraordinarily efficient at scale.

Binary Search

def binary_search(arr, target):   # arr must be sorted
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1  # target not found

Judgement: O(log n) is acceptable
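Python's standard library ships the same idea ready-made in the bisect module, which performs binary search on a sorted list. A quick sketch:

```python
import bisect

arr = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]

# bisect_left returns the leftmost insertion point: O(log n)
i = bisect.bisect_left(arr, 23)
found = i < len(arr) and arr[i] == 23
print(i, found)   # 5 True
```

Reaching for bisect instead of hand-rolling the loop also sidesteps the classic off-by-one bugs binary search is famous for.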


3. O(n) — Linear Time 📈

Proportional growth. Double the data, double the work.

# Scanning every item once to find the maximum
def find_max(numbers):
    maximum = numbers[0]
    for num in numbers:       # visits every element exactly once
        if num > maximum:
            maximum = num
    return maximum

Judgement: O(n) is acceptable


4. O(n log n) — Linearithmic Time

The sweet spot for sorting. Far better than quadratic.
a. Merge Sort

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])  
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):   
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    return result + left[i:] + right[j:]
print(merge_sort([38, 27, 43, 3, 9, 82, 10]))
# Result : [3, 9, 10, 27, 38, 43, 82]

Python's built-in sort() runs in O(n log n). When you see a well-built general-purpose sorting algorithm, you're almost certainly looking at this class.
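For everyday work you rarely write a sort yourself; the built-in does it in O(n log n). A short sketch of sorted() in action:

```python
scores = [38, 27, 43, 3, 9, 82, 10]
print(sorted(scores))                  # [3, 9, 10, 27, 38, 43, 82]
print(sorted(scores, reverse=True))    # [82, 43, 38, 27, 10, 9, 3]

# A key function changes the comparison, not the complexity class
words = ["banana", "fig", "apple"]
print(sorted(words, key=len))          # ['fig', 'apple', 'banana']
```

Note that adding a key function keeps the algorithm at O(n log n); it only changes what each comparison looks at.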

Judgement: O(n log n) is still good.


5. O(n²) — Quadratic Time

Nested loops are a red flag. Performance falls off a cliff.

def find_duplicates(arr):
    duplicates = []
    for i in range(len(arr)):              # O(n)
        for j in range(i + 1, len(arr)):   # O(n) again
            if arr[i] == arr[j]:
                duplicates.append((arr[i], arr[j]))
    return duplicates
print(find_duplicates([1, 3, 4, 2, 3, 1, 5]))
# [(1, 1), (3, 3)]

Judgement: O(n²) is worth avoiding when you can.
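The quadratic scan above is usually avoidable. A sketch of the standard O(n) fix (the function name is illustrative), trading a little memory for a lot of time:

```python
def find_duplicates_fast(arr):
    # One pass with a set: O(n) time, O(n) extra space.
    seen = set()
    duplicates = []
    for item in arr:
        if item in seen:        # O(1) average-case lookup
            duplicates.append(item)
        else:
            seen.add(item)
    return duplicates

print(find_duplicates_fast([1, 3, 4, 2, 3, 1, 5]))  # [3, 1]
```

Same question, same answer, but doubling the input now doubles the work instead of quadrupling it.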


6. O(n³) — Cubic Time 🧊

Rarely acceptable outside academic settings.

def three_sum_brute(arr):
    n = len(arr)
    triplets = []

    for i in range(n):                  
        for j in range(i + 1, n):     
            for k in range(j + 1, n):  
                if arr[i] + arr[j] + arr[k] == 0:
                    triplets.append((arr[i], arr[j], arr[k]))
    return triplets
print(three_sum_brute([-3, -1, 0, 1, 2, 3, -2]))
# [(-3, 0, 3), (-3, 1, 2), (-1, 0, 1), (-1, 3, -2), (0, 2, -2)]

Judgement: O(n³) is almost always worth redesigning away.
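The cubic scan can typically be cut to O(n²): sort once, then for each fixed element sweep two pointers inward. A sketch of that classic improvement (the function name is illustrative; it reports triplets over the sorted order):

```python
def three_sum(arr):
    # Sort once (O(n log n)), then an O(n) two-pointer sweep
    # per anchor element: O(n^2) overall.
    arr = sorted(arr)
    n = len(arr)
    triplets = []
    for i in range(n - 2):
        if i > 0 and arr[i] == arr[i - 1]:   # skip duplicate anchors
            continue
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = arr[i] + arr[lo] + arr[hi]
            if s == 0:
                triplets.append((arr[i], arr[lo], arr[hi]))
                lo += 1
                hi -= 1
            elif s < 0:
                lo += 1   # sum too small: move the left pointer right
            else:
                hi -= 1   # sum too large: move the right pointer left
    return triplets

print(three_sum([-3, -1, 0, 1, 2, 3, -2]))
# [(-3, 0, 3), (-3, 1, 2), (-2, -1, 3), (-2, 0, 2), (-1, 0, 1)]
```

The two pointers work because the array is sorted: a too-small sum can only be fixed from the left, a too-large one only from the right.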


7. O(2ⁿ) — Exponential Time 💣

Each added input doubles the workload. A design smell.

# Naive recursive Fibonacci — the most famous exponential trap
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)  # Recalculates the same values endlessly

# The fix: memoisation collapses this to O(n)
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci_fast(n):
    if n <= 1:
        return n
    return fibonacci_fast(n - 1) + fibonacci_fast(n - 2)

fibonacci(50) without memoisation makes roughly 40 billion recursive calls. The memoised version computes each value exactly once: 51 evaluations in total. This is the power of understanding why complexity matters, not just what it is.

Judgement: O(2ⁿ) is worth avoiding whenever possible.
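You don't have to take the call counts on faith; a sketch that measures them directly (kept to n = 20 so the naive version finishes quickly):

```python
from functools import lru_cache

calls = 0

def fib_naive(n):
    # Count every invocation of the naive recursion
    global calls
    calls += 1
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

fib_naive(20)
print(calls)   # 21891 calls just for n = 20

@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

fib_memo(20)
# cache misses = real computations: one per value 0..20
print(fib_memo.cache_info().misses)   # 21
```

Nearly twenty-two thousand calls versus twenty-one, at an input size of just 20 — and the gap itself grows exponentially from there.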


8. O(n!) — Factorial Time

Computationally prohibitive for almost everything.

# Brute-force Traveling Salesman Problem
from itertools import permutations

def traveling_salesman_brute(graph, start):
    cities = list(graph.keys())
    cities.remove(start)
    best = float('inf')

    for perm in permutations(cities):   # n! permutations
        route = [start] + list(perm) + [start]
        cost = sum(graph[route[i]][route[i+1]] for i in range(len(route)-1))
        best = min(best, cost)

    return best

Judgement: O(n!): avoid it.
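In practice the factorial brute force gets replaced: exact solvers use Held-Karp dynamic programming at O(n²·2ⁿ), and larger instances fall back to heuristics. A minimal sketch of the nearest-neighbour heuristic, using the same dict-of-dicts graph shape as the brute-force version above (it produces a route fast, with no guarantee of optimality):

```python
def traveling_salesman_nearest(graph, start):
    # Greedy O(n^2) heuristic: always hop to the closest unvisited city.
    unvisited = set(graph) - {start}
    route = [start]
    cost = 0
    while unvisited:
        here = route[-1]
        nxt = min(unvisited, key=lambda city: graph[here][city])
        cost += graph[here][nxt]
        route.append(nxt)
        unvisited.remove(nxt)
    cost += graph[route[-1]][start]   # close the loop back home
    return route + [start], cost

graph = {
    "A": {"B": 1, "C": 4, "D": 3},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"A": 3, "B": 5, "C": 1},
}
print(traveling_salesman_nearest(graph, "A"))
# (['A', 'B', 'C', 'D', 'A'], 7)
```

Trading a guaranteed optimum for a tractable running time is exactly the kind of deliberate, Big-O-informed trade-off the next section is about.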


Why This Changes How You Think

Big-O is not an academic exercise. It's a mode of thinking.

Once it becomes second nature, something shifts in how you approach problems:

  • You anticipate bottlenecks before they reach production
  • You design systems that scale rather than systems that survive
  • You make deliberate trade-offs between time complexity and space complexity
  • You read code differently — a nested loop is no longer "just syntax", it's a question mark

Most importantly, you stop asking "does this work?" and start asking "how does this behave at scale?"

Found this useful? Drop a ❤️ and follow for more pieces on engineering thinking, systems design, and the habits that separate good engineers from great ones.
