

## What is O(log n)? Learn Big O Logarithmic Time Complexity

## Jared Nielsen on February 21, 2020

Meanwhile, don't forget about big-Θ and big-Ω!

## What exactly is big O, big Θ, big Ω ?

## pontakornth ・ Jan 15 ・ 1 min read

Big theta and big Omega are the upper and lower bounds, right? But in software engineering we don't mind the upper or lower bounds, which is why we use big O? At least that's what I remember from the book Cracking the Coding Interview. Not sure though. lols

Not quite. Big O is the upper bound (longest possible run time), while big-Ω is the lower bound (shortest possible run time). It's big-Θ that's the combination of the two, although it can only be used if the upper and lower bounds are the same.

While big-O is the most commonly useful, since we most often care about the longest possible running time of a particular scenario (e.g. the worst case), it still has a precise meaning.

When we say "the function is at least O(n)", we're almost always misappropriating big-O (upper bound) to describe big-Ω (lower bound).

What's more, when we say "the function always has a runtime of O(n²)", we really mean "Θ(n²)"; it's not just the upper bound, but both bounds at once.

To put that another way, "lower bound" means "a runtime of at least", and "upper bound" means "a runtime no greater than". This has nothing to do with best case/worst case, as we are always speaking about the worst-case runtime unless explicitly stated otherwise.

Take a look at my comment on the article I linked to for more details.
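For reference, these are the standard formal definitions of the three bounds being discussed (textbook asymptotic notation, not part of the original comment):

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 > 0 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0 \\
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 > 0 \ \text{such that}\ f(n) \ge c \cdot g(n) \ \text{for all}\ n \ge n_0 \\
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
```

This is why Θ only applies when the same g(n) works as both an upper and a lower bound.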

// , If it gets really big, it's big-Θ-Lawd: youtube.com/watch?v=zlplCs00wTE

"Your time complexity is so bad, not even the TARDIS can handle it."

Nice explanation.

Can you provide some other common problems which can be solved in O(log n)?

TIA

O(log n) is extremely common - and in fact the one that will save you the most time if you've ever dealt with a database slowing down. Just plopping an index on a string column will turn a lookup from an O(n) operation (sequential scan) into an O(log n) operation - divide and conquer. Usual caveats here: databases tend to have some tweaks under the covers, but the typical index is a B-tree or the like, which is O(log n).

You can quantify the time savings using the math in this post. On 1000 rows, a sequential scan has to do up to 1000 comparisons to find the record(s) you want. With an index in place, it does about log2(1000) ≈ 10. That's pretty good - and you can see why an index like that would scale really well.
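A rough sketch of that back-of-the-envelope math (the function names are mine, and real databases add constant factors this ignores):

```python
import math

def linear_scan_comparisons(n):
    # Worst case: a sequential scan has to check every row.
    return n

def btree_lookup_comparisons(n):
    # A balanced-tree lookup halves the remaining search space at each step.
    return math.ceil(math.log2(n))

for n in (1_000, 1_000_000):
    print(f"{n} rows: scan={linear_scan_comparisons(n)}, "
          f"index={btree_lookup_comparisons(n)}")
```

At a million rows the index needs only ~20 comparisons versus a million for the scan, which is why the gap keeps widening as the table grows.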

In terms of algorithms you write - you'd be surprised at the optimizations you can get if you're using something like Redis, which doesn't have indexes (unless you create them). Since it's in memory, though, you can search a sorted list for a given value using binary search - pretty damn useful :).

O(log n) on its own is actually not very common.

There's fast exponentiation if you want to explore.

More often, you'll encounter O(n log n) in general-purpose algorithms (e.g. comparison sorts).
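The fast exponentiation mentioned above (binary exponentiation) is a classic O(log n) algorithm; here's a minimal sketch, where n is the exponent:

```python
def fast_pow(base, exp):
    """Compute base**exp using O(log exp) multiplications (binary exponentiation)."""
    result = 1
    while exp > 0:
        if exp & 1:       # exponent is odd: fold one factor of base into the result
            result *= base
        base *= base      # square the base
        exp >>= 1         # halve the exponent
    return result

fast_pow(3, 13)  # 1594323, in 4 squarings instead of 12 multiplications
```

Each iteration halves the exponent, which is exactly the "divide and conquer" shape that gives the logarithmic bound.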

Thanks.

A simple example is binary searching a number in an ordered array.
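A minimal sketch of that binary search, assuming the input list is already sorted:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent. O(log n)."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1

binary_search([1, 3, 5, 7, 9, 11], 7)  # 3
```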

Thanks for sharing such a great concept in an easy way. Kudos, bro 👏