Discussion on: Improve Your Algorithms with this Simple Equation

hidden_dude • Edited

I prefer the simpler proof:

S = 1 + 2 + 3 + ... + n

S = n + (n-1) + (n-2) + ... + 1


Adding the two lines term by term gives n copies of (n+1):

2S = (n+1) + (n+1) + (n+1) + ... + (n+1) = n(n+1)

=> S = n (n+1)/2
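
If you want to convince yourself numerically, here's a minimal sketch (my own, not part of the original argument) checking the closed form against the plain loop:

def sum_loop(n):
    # Add 1..n one term at a time: n additions
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # Closed form from the proof above: one add, one multiply, one divide
    return n * (n + 1) // 2

for n in (0, 1, 10, 1000):
    assert sum_loop(n) == sum_formula(n)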

Interestingly, the formula is O(1) if you consider addition/multiplication/division to be ATOMIC operations. But in reality such operations take time at least linear in the LENGTH OF THE INPUT |N|, which in this case is log N.

So the algorithm really takes time linear in the INPUT SIZE rather than O(1) (that is, O(log N) with respect to N's value, once you account for the realities of the bitwise math operations). This becomes evident if you start using 1000-bit numbers, for example.
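
Here's a rough sketch of what I mean (my own illustration, not a rigorous benchmark): the same one-line formula gets noticeably slower as the numbers get longer, because big-integer arithmetic is not constant time.

import time

for bits in (1_000, 100_000, 1_000_000):
    n = 1 << bits                 # an n that is 'bits' binary digits long
    start = time.perf_counter()
    s = n * (n + 1) // 2          # "O(1)" only if arithmetic were atomic
    print(bits, "bits:", time.perf_counter() - start, "seconds")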

This common confusion may lead people to believe that there's a polynomial-time algorithm for computing Fibonacci numbers, such as this "efficient" one:

def fib(n):
    # Iterative Fibonacci: n additions on ever-larger integers
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

(source: stackoverflow.com/questions/181722...)

But actually, since the INPUT SIZE here is log N, the algorithm is said to be "pseudo-polynomial": it's polynomial (O(N)) with regard to the value of N, but it's actually EXPONENTIAL with regard to the input size log N.

Again this becomes evident when you realize just how many bits you're going to need to represent that result.
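
To make that concrete, here's a small sketch (my own, nothing official): the bit length of fib(n) grows roughly linearly with n, and therefore exponentially with the input size log n.

def fib(n):
    # Same loop as above
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in (10, 100, 1_000, 10_000):
    print(n, "->", fib(n).bit_length(), "bits in the result")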

Note: this probably has no bearing on your real-world algorithm as long as you live within the confines of 32-bit or 64-bit numbers. :)

More on this confusing topic here: cs.stackexchange.com/questions/164...