DEV Community


What is O(log n)? Learn Big O Logarithmic Time Complexity

Jared Nielsen on February 21, 2020

Is there a computer science topic more terrifying than Big O notation? Don't let the name scare you: Big O notation is not a big deal. It's very ea...
Harshit (hpj1992)

Nice explanation.

Can you provide some other common problems which can be solved in O(log n)?

TIA

Rob Conery (robconery) • Edited

O(log n) is extremely common - and in fact the one that will save you the most time if you've ever dealt with a database slowing down. Just plopping an index on a string column will turn lookups from an O(n) operation (a sequential scan) into an O(log n) operation - divide and conquer. Usual caveats apply: databases tend to have some tweaks under the covers, but the typical index is a B-tree or the like, which is O(log n).

You can quantify the time savings using the math in this post. On 1000 rows, a sequential scan has to do up to 1000 comparisons to find the record(s) you want. With an index in place, it does about log2(1000), which is ~10. That's pretty good - and you can see why an index like that scales really well.
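The comparison counts above can be sanity-checked in a couple of lines (a sketch of the math only; real databases have constant factors this ignores):

```python
import math

rows = 1000
seq_scan = rows                          # worst case for a sequential scan
indexed = math.ceil(math.log2(rows))     # B-tree-style index: halve each step

print(seq_scan)  # 1000
print(indexed)   # 10
```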

In terms of algorithms you write - you'd be surprised at the optimizations you can get if you're using something like Redis, which doesn't have indexes (unless you create them). Since it's in memory, though, you can search a sorted list for a given value using binary search - pretty damn useful :).

Omicron (omicron666)

O(log n) on its own is actually not very common.
Fast exponentiation is one example, if you want to explore.
More often, you'll encounter O(n log n) for general-purpose algorithms.
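For reference, the fast exponentiation Omicron mentions (exponentiation by squaring) takes O(log n) multiplications for exponent n; a minimal sketch:

```python
def fast_pow(base, exp):
    """Exponentiation by squaring: O(log exp) multiplications."""
    result = 1
    while exp > 0:
        if exp & 1:          # odd exponent: fold one factor of base into result
            result *= base
        base *= base         # square the base
        exp >>= 1            # halve the exponent
    return result

print(fast_pow(3, 13))  # 1594323
```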

Harshit (hpj1992)

Thanks.

Julian Antonielli (jjant)

A simple example is binary searching a number in an ordered array.
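A sketch of that example - each comparison halves the remaining search space, hence O(log n):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # halve the search space each step
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1              # target is in the upper half
        else:
            hi = mid - 1              # target is in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```

Python's standard library offers the same idea via the `bisect` module.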

aRtoo (artoodeeto)

Big theta and big Omega are upper bounds and lower bounds, right? But in software engineering we don't mind the upper or lower bounds, which is why we use big O? At least that's what I remember from the book Cracking the Coding Interview. Not sure though. lols

Jason C. McDonald (codemouse92) • Edited

Not quite. Big O is the upper-bound (longest possible run time), while big-Ω is the lower bound (shortest possible run time). It's big-Θ that's the combination of the two, although it can only be used if the upper and lower bounds are the same.

While big-O is the most commonly useful, since we most often care about the longest possible running time of a particular scenario (i.e. the worst case), it still has a precise meaning.

When we say "the function is at least O(n)", we're almost always misappropriating big-O (upper bound) to describe big-Ω (lower bound).

Further, when we say "the function always has a runtime of O(n²)", we really mean "Θ(n²)": it's not just the upper bound, but both bounds at once.

To put that another way, "lower bound" means "a runtime of at least", and "upper bound" means "a runtime no greater than". This has nothing to do with best-case/worst-case, as we are always speaking about the runtime of worst-case, unless explicitly stated otherwise.

Take a look at my comment on that article I linked to for more details.

🦄N B🛡 (v6)

// , If it gets really big, it's big-Θ-Lawd: youtube.com/watch?v=zlplCs00wTE

Jason C. McDonald (codemouse92)

"Your time complexity is so bad, not even the TARDIS can handle it."

Mahesh Sv (maheshthedev)

Thanks for sharing such a great concept in an easy way. Kudos, bro 👏