just showkat
Big O notation and time complexity in a nutshell

Big O notation is a way of expressing the time complexity of an algorithm. It provides a rough estimate of how long an algorithm takes to run as the input size grows larger. Understanding time complexity is important for evaluating the efficiency of algorithms and for choosing the right algorithm for a given task.

In this article, we will explore what time complexity is, how it is expressed using Big O notation, and how to analyze the time complexity of different algorithms. We will also look at some common time complexities and examples of algorithms that have those complexities.

What is time complexity?

The time complexity of an algorithm is the number of basic operations that the algorithm performs as a function of the size of the input. For example, an algorithm whose running time doubles whenever the input size doubles has a time complexity of O(n).

The input size is often represented by the variable n. For example, if we are sorting a list of numbers, n would be the number of elements in the list. If we are searching a database, n would be the number of records in the database.

Time complexity is important because it tells us how the running time of an algorithm scales with the input size. If an algorithm has a time complexity of O(n), it will take longer to run on a large input than it will on a small input, but the running time will increase linearly with the input size. If an algorithm has a time complexity of O(n^2), the running time will increase much faster as the input size increases, because it is proportional to the square of the input size.
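To make this concrete, here is a small Python sketch (the function names are illustrative, not from any library) that counts the basic operations performed by a single pass over a list versus a pass over every pair of elements:

```python
def linear_pass(items):
    """Touch each element once: about n operations, i.e. O(n)."""
    ops = 0
    for _ in items:
        ops += 1
    return ops

def pairwise_pass(items):
    """Touch every pair of elements: about n * n operations, i.e. O(n^2)."""
    ops = 0
    for _ in items:
        for _ in items:
            ops += 1
    return ops

small, large = list(range(100)), list(range(200))
# Doubling the input doubles the linear count but quadruples the quadratic one.
print(linear_pass(small), linear_pass(large))      # 100 200
print(pairwise_pass(small), pairwise_pass(large))  # 10000 40000
```

Counting operations like this, rather than timing wall-clock seconds, is exactly the abstraction that Big O notation captures.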

How is time complexity expressed using Big O notation?

Big O notation is a way of expressing the upper bound of an algorithm's time complexity. It provides a rough estimate of the maximum number of operations an algorithm will perform as the input size grows larger.

For example, an algorithm with a time complexity of O(n) is at most linear, but it could be faster in practice. An algorithm with a time complexity of O(n^2) is at most quadratic, but it could be faster in practice.

There are several different types of time complexity that can be expressed using Big O notation:

  • O(1) - Constant time. This means that the running time of the algorithm is independent of the input size. An algorithm with a time complexity of O(1) will take the same amount of time to run no matter how large the input is.
  • O(log n) - Logarithmic time. This means that the running time grows logarithmically with the input size. An algorithm with a time complexity of O(log n) needs only a constant number of additional operations each time the input size doubles.
  • O(n) - Linear time. This means that the running time grows linearly with the input size. An algorithm with a time complexity of O(n) will take twice as long to run on twice as large an input.
  • O(n log n) - Log-linear time. This means that the running time grows as the product of the input size and the logarithm of the input size.
  • O(n^2) - Quadratic time. This means that the running time is proportional to the square of the input size. An algorithm with a time complexity of O(n^2) will take four times as long to run on an input that is twice as large.
  • O(n^3) - Cubic time. This means that the running time is proportional to the cube of the input size. An algorithm with a time complexity of O(n^3) will take eight times as long to run on twice as large an input.
  • O(2^n) - Exponential time. This means that the running time grows exponentially with the input size. An algorithm with a time complexity of O(2^n) will take twice as long to run on an input that is one element larger.
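A few of these classes can be illustrated with tiny Python functions. This is a sketch for intuition only; the function names are made up for this article:

```python
def constant(items):
    """O(1): one operation regardless of how long the list is."""
    return items[0] if items else None

def logarithmic(n):
    """O(log n): halve the problem until it shrinks to 1."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def linear(items):
    """O(n): one pass over the input."""
    total = 0
    for x in items:
        total += x
    return total

def quadratic(items):
    """O(n^2): visit every (element, element) pair."""
    pairs = 0
    for _a in items:
        for _b in items:
            pairs += 1
    return pairs

print(logarithmic(1024))           # 10 halvings: log2(1024) = 10
print(quadratic(list(range(50))))  # 2500 pairs: 50 * 50
```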

It's important to note that Big O notation only provides a rough estimate of an algorithm's time complexity. The actual running time of an algorithm can depend on many factors, such as the specific input data, the hardware and software environment, and the implementation of the algorithm.

Analyzing the time complexity of algorithms

To analyze the time complexity of an algorithm, we need to consider the number of basic operations it performs as a function of the input size. Some common constructs that affect time complexity include:

  • Loops: The time complexity of an algorithm can increase based on the number of times a loop is executed. For example, an algorithm that has a nested loop will have a time complexity of O(n^2) if the inner loop iterates over all elements of the input for each iteration of the outer loop.
  • Recursion: Recursive algorithms can have a time complexity that grows exponentially with the input size. This happens when each call makes more than one recursive call, so the total number of calls multiplies at every level of the recursion.
  • Function calls: The time complexity of an algorithm can also be affected by the number of function calls it makes. If an algorithm calls a function that takes O(n) time for each element in the input, the overall time complexity of the algorithm will be O(n^2).
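The recursion point above can be seen by counting calls. Here is a sketch using the classic naive Fibonacci function, where each call spawns two more, so the total number of calls grows roughly as O(2^n) (the call-counting wrapper is our own construction for illustration):

```python
def fib_calls(n, counter=None):
    """Naive recursive Fibonacci that also counts how many calls it makes.

    Each non-base call makes two further recursive calls, so the total
    number of calls grows exponentially with n.
    """
    if counter is None:
        counter = [0]
    counter[0] += 1
    if n < 2:
        return n, counter[0]
    a, _ = fib_calls(n - 1, counter)
    b, _ = fib_calls(n - 2, counter)
    return a + b, counter[0]

value, calls = fib_calls(10)
print(value, calls)  # 55 177 -- 177 calls just to compute the 10th Fibonacci number
```

Memoizing the function (caching results by `n`) would collapse this to O(n), which is why recursion alone does not determine complexity; the branching and reuse pattern does.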

Examples of time complexity

Here are some examples of algorithms and their time complexities:

  • Linear search: A linear search algorithm looks through a list of elements one by one until it finds the element it is looking for. The time complexity of a linear search is O(n), because in the worst case it must examine every element in the list.
  • Binary search: A binary search algorithm looks for an element in a sorted list by dividing the list in half and searching in one half or the other based on whether the element is larger or smaller than the middle element. The time complexity of a binary search is O(log n), because it reduces the search space by half with each iteration.
  • Insertion sort: An insertion sort algorithm sorts a list by iterating through the elements one by one and inserting each element into its correct position in the sorted portion of the list. The time complexity of an insertion sort is O(n^2) in the worst case, because each element may have to be compared with, and shifted past, all of the elements before it.
  • Merge sort: A merge sort algorithm sorts a list by dividing it in half, sorting the two halves, and then merging the two sorted halves together. The time complexity of a merge sort is O(n log n), because the list is halved O(log n) times and each level of merging takes O(n) time.
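The four algorithms above can be sketched in Python roughly as follows (standard textbook versions, not tied to any particular library):

```python
def linear_search(items, target):
    """O(n): scan left to right until the target is found."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): halve the search space each step; the list must be sorted."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def insertion_sort(items):
    """O(n^2) worst case: each element may shift past all elements before it."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

def merge_sort(items):
    """O(n log n): log n levels of halving, O(n) merge work per level."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

data = [5, 2, 9, 1, 7]
print(linear_search(data, 9))          # 2
print(binary_search(sorted(data), 7))  # 3
print(insertion_sort(data))            # [1, 2, 5, 7, 9]
print(merge_sort(data))                # [1, 2, 5, 7, 9]
```

Note the precondition in `binary_search`: the O(log n) bound only holds because the input is already sorted, which is why sorting cost matters when choosing between the two search strategies.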

Conclusion

Big O notation is a way of expressing the time complexity of an algorithm. It provides a rough estimate of how long an algorithm takes to run as the input size grows larger. Time complexity is important for evaluating the efficiency of algorithms and for choosing the right algorithm for a given task. By understanding the different time complexities and how to analyze the time complexity of algorithms, we can make informed decisions about which algorithms to use for different tasks.
