Runtime complexity is a term we use to describe how the running time of an algorithm grows as the size of its input grows.
We use runtime complexity to compare different solutions or algorithms for a given problem, and in the context of an interview you will very frequently be asked what the runtime complexity of a given solution is.
Runtime complexity is expressed in a mathematical notation called Big-O.
Let’s see some common time complexities described in the Big-O notation.
Time Complexities
O(1) - Constant Time
The fastest possible running time for any algorithm. An algorithm is said to run in constant time if it requires the same amount of time regardless of the input size. This is the ideal runtime for an algorithm.
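For instance, accessing a list element by its index or looking up a key in a dictionary takes roughly the same time whether there are ten items or ten million. Here is a minimal sketch in Python (the function and variable names are just illustrative):

```python
def get_first_item(items):
    # Indexing a list is O(1): the cost does not depend on len(items).
    return items[0]

prices = {"apple": 1.25, "banana": 0.50}

def price_of(fruit):
    # Average-case dictionary lookup is also O(1).
    return prices[fruit]
```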
O(n) - Linear Time
Describes an algorithm whose performance grows linearly and in direct proportion to the size of the input data set. So a data set twice the size will take twice the time, and a million items will take roughly a million steps.
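A simple illustration is scanning a list once, for example to find the largest value (a sketch, with an illustrative function name):

```python
def find_max(numbers):
    # Visits every element exactly once, so the work grows
    # in direct proportion to the input size: O(n).
    largest = numbers[0]
    for value in numbers[1:]:
        if value > largest:
            largest = value
    return largest
```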
O(log n) - Logarithmic Time
It basically means the time goes up linearly while n goes up exponentially. So if it takes 1 second to compute 10 elements, it will take 2 seconds to compute 100 elements, 3 seconds to compute 1000 elements, and so on.
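The classic example is binary search on a sorted list: each comparison halves the remaining range, so doubling the input adds only one extra step. A minimal sketch:

```python
def binary_search(sorted_items, target):
    # Each iteration halves the search range, so the number of
    # steps grows with log2(n): O(log n).
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not found
```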
O(n²) - Quadratic Time
Represents an algorithm whose performance is directly proportional to the squared size of the input data set. This is common with algorithms that involve nested iterations over the data set. As the input increases, the time to run the algorithm grows at the rate of its square.
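For example, comparing every pair of elements with nested loops is quadratic (a sketch with an illustrative function name):

```python
def has_duplicate(items):
    # The nested loops compare every pair of elements, so for n items
    # the work grows on the order of n * n: O(n^2).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```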
O(2ⁿ) - Exponential Time
An algorithm is said to have exponential time, or O(2ⁿ), if its runtime doubles with each addition to the input data set, starting off very shallow, then rising meteorically.
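A textbook illustration is the naive recursive Fibonacci, where each call branches into two more calls (a sketch; the exact growth rate is a bit below 2ⁿ, but it is commonly described as O(2ⁿ)):

```python
def fib(n):
    # Each call spawns two further calls, so the call tree roughly
    # doubles with every increase in n: exponential time.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```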
Take a look at the Array Sorting Algorithms cheatsheet!
I hope you will find this article helpful.