Last time I talked about arrays, but I realized I was discussing time complexity without explaining what time and space complexity actually are. So, before diving into algorithms, I'd like to take a moment to explore these fundamental concepts.
In simple terms, an algorithm is a set of instructions that tells a computer how to perform a task; outside of computer science, it's simply a step-by-step solution to a problem.
To measure an algorithm's effectiveness, we use Big-O notation. Why do we need to measure algorithms? To compare possible solutions and pick an efficient one.
This is crucial because inputs can grow very large, and an inefficient algorithm will consume excessive time and space to complete, potentially exhausting memory or making the system unresponsive.
When evaluating effectiveness, we always consider the worst-case scenario.
Space complexity
Space complexity measures how much extra memory an algorithm requires relative to its input size. As the input grows, the memory requirements increase accordingly.
The space complexity includes:
- Temporary variables
- Auxiliary data structures
- The call stack used by recursive calls
This metric is crucial for systems with limited memory, where an algorithm that's inefficient in terms of memory usage may be unacceptable, even if it's time-efficient.
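To make this concrete, here's a minimal Python sketch (the function names are purely illustrative): the first function needs only a single accumulator no matter how long the list is, while the second allocates a new list that grows with the input.

```python
def sum_in_place(numbers):
    # O(1) extra space: one accumulator, regardless of input size.
    total = 0
    for value in numbers:
        total += value
    return total


def doubled_copy(numbers):
    # O(n) extra space: builds a brand-new list as large as the input.
    return [value * 2 for value in numbers]
```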
Time complexity
Time complexity describes how long an algorithm will take to execute in relation to the input size. It allows us to measure an algorithm's efficiency without executing it, helping us determine if it will remain scalable as the input size grows.
Big-O
Time and space complexity are measured using Big-O notation, which employs algebraic terms to describe the complexity of your code.
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. - Wikipedia definition
When it comes to Big-O notation, you need to let go of your anxiety and always think about the worst-case scenario. That way, the Big-O of your algorithm represents its worst-case performance: what happens as the input grows toward infinity.
For this, we have different algebraic representations:
O(1): Constant time
Indicates that the execution time of an algorithm remains constant regardless of input size. This algorithm is super fast.
Imagine you have a box filled with items, and you know exactly where each item is located. To get a specific item, you go directly to its location, taking the same amount of time irrespective of how many items are in the box.
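In code, that direct access might look like the following sketch (the item names are made up): indexing a list or looking up a key in a dictionary takes the same time whether it holds four items or four million.

```python
items = ["hammer", "tape", "scissors", "glue"]
prices = {"hammer": 12, "tape": 3, "scissors": 5, "glue": 2}

first_item = items[0]        # indexing by position is O(1)
tape_price = prices["tape"]  # an average hash-table lookup is O(1)
```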
O(log n): Logarithmic time
Describes an algorithm's runtime that grows logarithmically with the input size (n). As the input gets larger, the execution time increases very slowly. This makes logarithmic algorithms very fast and efficient.
Example:
Imagine you want to find a word in a dictionary: you open the book to the middle, check whether your word comes before or after that page, discard the half that can't contain it, and repeat until you find the word.
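That strategy is binary search. Here's a minimal Python sketch of it, assuming the list of words is already sorted:

```python
def binary_search(sorted_words, target):
    # Each iteration halves the search range, so the loop runs O(log n) times.
    low, high = 0, len(sorted_words) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_words[mid] == target:
            return mid
        if sorted_words[mid] < target:
            low = mid + 1   # target is after the middle: discard the left half
        else:
            high = mid - 1  # target is before the middle: discard the right half
    return -1  # not found
```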
O(n): Linear time
Signifies that the execution time of the algorithm grows linearly in proportion to the size of the input data (n). This algorithm is pretty fast.
Example:
Imagine you have a list of groceries. To find a specific item (like milk), you might need to scan through the entire list. If the list has 5 items, it’ll take a relatively short time. But if the list has 500 items, it’ll take considerably longer.
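Here's a sketch of that scan in Python, with a hypothetical groceries list:

```python
def find_item(groceries, target):
    # Worst case the item is last (or missing), so the whole list is checked: O(n).
    for index, item in enumerate(groceries):
        if item == target:
            return index
    return -1


find_item(["eggs", "bread", "milk", "butter"], "milk")
```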
O(n log n): Linear logarithmic
This is a combination of linear and logarithmic time: for n items, it performs a logarithmic-time operation for each of the n items. This algorithm is slower than the previous ones but still efficient.
Example:
Imagine you need to sort a deck of cards: you split the deck in half over and over, then merge the sorted halves back together in order.
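That splitting-and-merging idea is essentially merge sort. A compact Python sketch:

```python
def merge_sort(cards):
    # The deck is halved O(log n) times, and each level merges all n cards: O(n log n).
    if len(cards) <= 1:
        return cards
    mid = len(cards) // 2
    left = merge_sort(cards[:mid])
    right = merge_sort(cards[mid:])

    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # append whatever remains in either half
    merged.extend(right[j:])
    return merged
```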
O(n²): Quadratic
Here we enter risky territory. This means your algorithm's execution time grows proportionally to the square of the input size. Double the input? Time increases by 4×. Triple the input? Time increases by 9×. And so on. This algorithm is notably slow.
Example:
Let’s use the same card deck example for O(n log n). But this time imagine that when you are sorting the cards, you are comparing every card with every other card.
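One quadratic approach that matches that picture is selection sort; here's a sketch where each position scans every remaining card using two nested loops:

```python
def selection_sort(deck):
    # For each position, scan all remaining cards for the smallest one:
    # roughly n * n comparisons, O(n^2).
    cards = list(deck)
    for i in range(len(cards)):
        smallest = i
        for j in range(i + 1, len(cards)):
            if cards[j] < cards[smallest]:
                smallest = j
        cards[i], cards[smallest] = cards[smallest], cards[i]  # put the smallest card in place
    return cards
```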
O(2ⁿ): Exponential
This is the riskiest yet; it usually shows up on problems for which no efficient solution has been found... yet. This algorithm's execution time doubles with every single item added to the input.
To visualize this: 5 items -> 32 executions. 10 items -> 1,024 executions. 20 items -> 1,048,576 executions.
This algorithm is extremely slow and you might want to consider another career path.
Example:
Imagine you find a chest with 50 dials, each with two positions: up and down. To crack it, you try every possible combination, all 2^50 of them (over a quadrillion). You will be able to open it eventually, but even testing a million combinations per second, it would take you more than 35 years.
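Here's a brute-force sketch of that chest in Python, where `is_correct` is a hypothetical stand-in for whatever check tells you the chest actually opened:

```python
def open_chest(num_dials, is_correct):
    # Tries every up/down combination: 2**num_dials possibilities, O(2^n).
    def try_all(settings):
        if len(settings) == num_dials:
            return settings if is_correct(settings) else None
        for position in ("up", "down"):
            found = try_all(settings + [position])
            if found is not None:
                return found
        return None

    return try_all([])


# Toy usage: 3 dials means at most 2**3 = 8 combinations to try.
open_chest(3, lambda settings: settings == ["up", "down", "up"])
```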
O(n!): Factorial
This is the worst time complexity we'll cover here. It means the algorithm's execution time grows with the factorial of the input size. To put this in perspective, 100! is a 158-digit number. This algorithm would take practically forever to complete; you could earn an entire new bachelor's degree before it finishes running.
Imagine yourself rearranging a bookshelf, but testing every possible order of the books. Just for the top ten best sellers, that's 3,628,800 arrangements. See you when the universe implodes.
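A brute-force sketch of that shelf, using Python's itertools.permutations and a hypothetical `score` function that rates an arrangement:

```python
from itertools import permutations


def best_shelf_order(books, score):
    # Tests every possible ordering: n! arrangements, O(n!).
    # Ten books is already 3,628,800 orderings.
    best_order, best_score = None, float("-inf")
    for order in permutations(books):
        value = score(order)
        if value > best_score:
            best_order, best_score = order, value
    return best_order
```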
We haven't discussed specific space complexity measurements, but you should now have a good grasp of how it works with constant, linear, or even factorial complexity. By the way, there are algorithms with time complexity even worse than factorial—but your PC would likely crash before completing such algorithms.
After reading this, you should understand the importance of measuring algorithms and why learning Data Structures and Algorithms (DSA) is valuable. Now we can move on to some examples to help with your interview preparation.