I took a course at Khan Academy, but I didn't understand it yet. I think understanding these would improve my understanding of algorithms.
Top comments (3)
In all three notations, you're measuring the growth of the running time as n increases. The difference between them is how much information you have.

Big-Θ is bounded on both ends, meaning you know that the running time should (generally) be no greater than, say, k₂ * n² + 3n, and no less than k₁ * n² + 3n. In fact, you're measuring your worst-case scenario with some certainty here. We'd say, then, that the algorithm in question is Θ(n²), since we throw away the constants (k₁ and k₂) and lower-order terms (3n).

The trouble is, most algorithms aren't going to run worst-case every time. If we only want to measure the upper bound, and just suffice to say "it's never greater than this, but it could be less," that's what big-O is for; it measures the asymptotic upper bound.
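A quick worked illustration (my own numbers, not tied to any particular algorithm): suppose the running time is exactly 5n² + 3n. For every n ≥ 1 we have 5n² ≤ 5n² + 3n ≤ 8n², so with k₁ = 5 and k₂ = 8 the running time is sandwiched between two constant multiples of n². That sandwich is exactly what Θ(n²) asserts.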
So, in our preceding example, if we know the worst case is k₁ * n² + 3n, or Θ(n²), but the best case is something less than that, like Θ(n), suddenly big-Θ is no longer useful. We can't express bounds like that with it. Thus, we instead say the algorithm is O(n²): the runtime won't be greater than n², but it could be less (n, in this case).

In other words, n² is the upper bound, but the lower bound could be below that. Big-O does not define a lower bound. The purpose of big-O, then, is to admit that we are leaving some information out.

To put that yet another way, big-O is accurate for all cases by sacrificing part of its precision. You're saying "it won't be any worse than this." Unlike with big-Θ, you've eschewed the lower bound.
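To make that concrete, here's a sketch (my own example, not from anything above) of an algorithm whose cases differ: linear search. The worst case (target missing or last) examines all n items, and the best case (target first) examines one, so no single Θ bound covers every case, but O(n) does.

# Linear search: worst case Θ(n), best case Θ(1), so we describe it as O(n).
def linear_search(items, target):
    for index, value in enumerate(items):
        if value == target:
            return index  # best case: found on the first comparison
    return -1             # worst case: every item was examined

print(linear_search([4, 8, 15, 16, 23, 42], 15))  # prints 2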
Big-Ω, then, is the exact opposite. You're providing the asymptotic lower bound, but omitting the upper bound. You're saying "it won't be any better than this, but it could be worse." If the example algorithm has a best-case runtime of Ω(n), we could accurately say that the algorithm is Ω(n).

This doesn't mean that Ω(n) is automatically the measurement of the best case. Unless otherwise stated, we are always describing the worst-case scenario, as that's the most useful thing to measure in algorithmic efficiency.

You can probably see, then, why we almost always use only big-O in discussing algorithms: it's the only one of the three to be accurate for all cases and denote the worst-case scenario. Big-Θ is only useful for denoting specific scenarios, or in the rare algorithm where O(f(n)) == Ω(f(n)).

In any case, our goal is ultimately to achieve the most accuracy and precision possible in our description of an algorithm or a scenario. Of course, since we can't accurately describe most algorithms with just big-Θ, we'll usually throw away the lower bound and describe it in terms of big-O instead.
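Here's one more sketch (again, my own example) where both bounds are worth stating: insertion sort. On reverse-sorted input the inner loop shifts every earlier element, roughly n²/2 operations; on already-sorted input the inner loop exits immediately, roughly n operations. Taken over all inputs, then, insertion sort is O(n²) and Ω(n).

# Insertion sort: worst case Θ(n²), best case Θ(n);
# over all cases it is O(n²) and Ω(n).
def insertion_sort(items):
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]  # shift larger elements right
            j -= 1
        items[j + 1] = current
    return items

print(insertion_sort([5, 2, 4, 1, 3]))  # prints [1, 2, 3, 4, 5]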
Side note: this means that a lot of people use these notations incorrectly. Ever seen something like this?
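Something along these lines (a made-up example, but representative of the pattern):

Best case: O(n)
Worst case: O(n log n)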
That's probably not what is meant, because in both cases, only the upper bound of each scenario is being denoted. I'm fairly certain the lower bound of the algorithm is known if the best-case scenario is being described.
If both bounds of a scenario are known, we should be more precise. We should generally use big-Θ instead if we know both:
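For our hypothetical algorithm, that would be:

Best case: Θ(n)
Worst case: Θ(n log n)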
Or, if we only know the upper bound of the worst case and the lower bound of the best case, we could use big-O and big-Ω. Just be sure to denote when you're not describing the worst case!
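Again for the same hypothetical algorithm:

Best case: Ω(n)
Worst case: O(n log n)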
That nicely fences in the entire algorithm.
EDIT: Please see @codemouse92's response on this thread for a more thorough and correct explanation!
These terms are ways of describing the runtime of an algorithm in terms of the size of the input, or time complexity. If your input size is n, saying O(n) means you expect the runtime of this operation to scale linearly with the input: if you have four items, it will take twice as long as with two. A worse runtime is O(n²); this means your runtime grows quadratically as your input increases, so you've probably got a nested loop or something. A big-O of 1, written O(1), is called constant time: the runtime of the operation is not dependent on the size of the input at all.

Big-O describes the worst case of an algorithm: your algorithm will take at most that long if your data is in the furthest possible state from the expected output. Conversely, Big-Ω is the best case, or, in other words, the lower bound. For a given input size, you can expect this algorithm to take at least that long. If your set is already sorted at input, how long does it take your sorting algorithm to notice this and exit?
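A rough sketch (my own example) of the three growth rates mentioned above:

def first_item(items):       # O(1): one step regardless of input size
    return items[0]

def total(items):            # O(n): touches each item exactly once
    result = 0
    for value in items:
        result += value
    return result

def has_duplicates(items):   # O(n²): nested loops compare every pair
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False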
Big-Θ is in the middle. It describes the expected average runtime. It gets more accurate as your input size increases, but for any workload, you can state that you expect the runtime to be between a lower bound and an upper bound. A solid quote from Khan Academy:
Close, but not quite. (I'll admit, Khan Academy's explanation is more than a little vague at times.) Big-O, big-Θ, and big-Ω can all be used to measure the running time of the worst case. The difference is which bounds are defined by the notation.
I'm putting my entire answer in a separate comment.