Big-Oh (O) does not always give you the worst case, but why?

The running time complexity of an algorithm is given by the run-time function of that algorithm. Run-time functions are expressed as mathematical functions, and they are often compared against standard mathematical functions such as linear, quadratic, and exponential functions. The input variable of the mathematical function corresponds to the input size of the algorithm, and the output of the function corresponds to the running time. Studying how the output varies as the input variable changes therefore mirrors how the run-time of the algorithm changes with the input size. Formally, this comparison and analysis of functions is called asymptotic analysis of an algorithm. The well-known asymptotic notations are as below:

  • Big-Oh
  • Theta
  • Omega

All three notations describe the order of growth of a run-time function as the input changes. The functions are compared by their growth rates, that is, by the slopes of their curves and the manner in which they grow. Each notation uses auxiliary constants (such as c and n0) to keep the definition generic. As a starting point, the sketch below shows how a run-time function can be derived in the first place.
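For instance, a run-time function can be obtained by counting the operations an algorithm performs. Here is a minimal Python sketch; the function name and the exact operation counts are illustrative assumptions, not from the original post:

```python
def sum_list(values):
    """Sum a list by visiting every element exactly once."""
    total = 0            # 1 assignment
    for v in values:     # loop runs n times for n elements
        total += v       # 1 addition per iteration
    return total         # 1 return

# Counting operations gives roughly f(n) = n + 2 for an input of
# size n, so we compare f(n) against the standard linear function
# g(n) = n when analyzing its growth.
```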

Big-Oh

The Big-Oh notation expresses an asymptotic upper bound on a given run-time function of the algorithm. Saying that f(n) belongs to O(g(n)) means that, beyond some starting input n0, the function g(n) scaled by a constant grows at least as fast as f(n); such functions g(n) are commonly called upper-bound functions.
Mathematically the Big-Oh notation is given as below:

O(g(n)) = \lbrace f(n) : 0 \leq f(n) \leq c \cdot g(n),\ \forall n \geq n_0 \rbrace

Here f(n) is the run-time function computed for the algorithm, and O(g(n)) is the set of all functions that are bounded above by c·g(n) for some constant c > 0 once n reaches n0. If the computation considered is the worst-case scenario, then f(n) is the worst-case run-time function and O(g(n)) supplies its worst-case asymptotic upper bounds. Basically, O(g(n)) is a set of functions, and g(n), scaled by some constant c, has a higher slope than each function in the set; g(n) is therefore commonly called an upper-bound function of f(n). A small numeric check of this definition is sketched below.
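As a quick sanity check of the definition, here is a small Python sketch; the choices f(n) = 3n + 2, g(n) = n, c = 4, and n0 = 2 are hypothetical values picked for illustration:

```python
# Hypothetical example: f(n) = 3n + 2 belongs to O(n), because
# f(n) <= c * g(n) holds with c = 4 for every n >= n0 = 2.
f = lambda n: 3 * n + 2
g = lambda n: n
c, n0 = 4, 2

assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
print("f(n) <= 4 * g(n) for every checked n >= 2")
```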

So the Big-Oh notation gives an upper bound for whichever run-time function it is applied to, regardless of which case that function was calculated for. If the input to Big-Oh is the best-case (minimum) run-time function, it yields upper bounds on that best-case function. If the input is the average-case run-time function, Big-Oh gives functions that grow at least as fast as that average-case function as the input size changes. That is exactly why Big-Oh does not always mean the worst case: it bounds whichever case you feed it, as the sketch below makes concrete.
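Here is a sketch using linear search (an assumed example, not from the original post). The same algorithm has a best case bounded by O(1) and a worst case bounded by O(n), so quoting "the" Big-Oh of an algorithm only makes sense once you say which case you are bounding:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

data = list(range(1_000_000))

# Best case: the target sits at index 0, so the loop exits after one
# comparison. The best-case run-time function is constant, i.e. O(1).
linear_search(data, 0)

# Worst case: the target is absent, so all n elements are compared.
# The worst-case run-time function is linear, i.e. O(n).
linear_search(data, -5)
```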

The other two notations are described below.

Theta

The Theta notation gives the functions that bound the given run-time function tightly from both sides. As with the asymptotic upper bound described above, the Theta notation is also described using a set.

The Theta notation is defined as the notation that gives the functions forming tight boundary limits around the given run-time function, within some constant factors. The asymptotic tight bound is denoted with the Greek letter Theta.

Mathematically it is defined as below:

\Theta(g(n)) = \lbrace f(n) : 0 \leq c_1 \cdot g(n) \leq f(n) \leq c_2 \cdot g(n),\ \forall n \geq n_0 \rbrace

Here f(n) is the run-time function of the algorithm, and the set is defined in such a way that g(n) sandwiches f(n): scaled by the constants c1 and c2, it bounds f(n) from below and from above for all n ≥ n0. This sandwich can be verified numerically, as sketched below.
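In this sketch the values f(n) = 2n² + 3n, g(n) = n², c1 = 2, c2 = 3, and n0 = 3 are hypothetical choices for illustration:

```python
# Hypothetical example: f(n) = 2n^2 + 3n is Theta(n^2), since
# c1 * g(n) <= f(n) <= c2 * g(n) with c1 = 2, c2 = 3 for n >= n0 = 3.
f = lambda n: 2 * n**2 + 3 * n
g = lambda n: n**2
c1, c2, n0 = 2, 3, 3

assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
print("2 * g(n) <= f(n) <= 3 * g(n) for every checked n >= 3")
```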

Omega

The asymptotic lower bound gives the functions g(n) that, scaled by some constant c, have lower slopes than the calculated run-time function f(n).
Mathematically the asymptotic lower bound notation is defined as below:

\Omega(g(n)) = \lbrace f(n) : 0 \leq c \cdot g(n) \leq f(n),\ \forall n \geq n_0 \rbrace

Every function f(n) in this set grows at least as fast as c·g(n). So, as the input changes, the growth observed in the bound c·g(n) never exceeds that of the run-time function itself, which is why g(n) is called a lower-bound function. A quick numeric check follows below.
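The same style of numeric check works for the lower bound; f(n) = n² and g(n) = n with c = 1 and n0 = 1 are hypothetical picks:

```python
# Hypothetical example: f(n) = n^2 is Omega(n), because
# c * g(n) <= f(n) holds with c = 1 for every n >= n0 = 1.
f = lambda n: n**2
g = lambda n: n
c, n0 = 1, 1

assert all(c * g(n) <= f(n) for n in range(n0, 10_000))
print("g(n) <= f(n) for every checked n >= 1")
```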

