To be precise:
Big-O notation doesn't have anything inherently to do with program performance, runtime, memory usage, whatever.
It also doesn't have anything to do with "best case", "worst case", "average case", etc.
Big-O notation describes functions -- mathematical operations that take in numbers and spit out numbers.
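For reference, here's the standard definition written out (a sketch in LaTeX notation, since the comment above doesn't spell it out): it's purely a statement about one function eventually being bounded by a constant multiple of another.

  % f is O(g) iff some constant multiple of g eventually dominates f
  f(n) \in O\bigl(g(n)\bigr)
    \iff
  \exists\, c > 0,\ \exists\, n_0,\ \forall\, n \ge n_0:\ |f(n)| \le c \cdot |g(n)|

For instance, 3n^2 + 5n \in O(n^2), since 3n^2 + 5n \le 4n^2 for every n \ge 5 (take c = 4, n_0 = 5). Nothing in the definition mentions programs, inputs, or cases.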
It turns out this is really useful when we're talking about computer programs, but the concepts themselves are orthogonal. You can talk about programs without using big-O notation at all by saying specifically what you are counting: the most possible, the least possible, or the "average" (after fixing a distribution of inputs!) number of nanoseconds, clock cycles, instructions executed, etc. A concrete sketch of that idea is below.
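As an illustration of "saying specifically what you are counting" (this sketch isn't from the comment above; the choice of insertion sort and the function name are just for the example), you can instrument a program and count an exact quantity for a specific input, with no big-O anywhere:

  # Count exactly how many element comparisons insertion sort performs on one
  # particular input, instead of summarizing its growth with big-O notation.
  def insertion_sort_comparisons(items):
      items = list(items)          # work on a copy
      comparisons = 0
      for i in range(1, len(items)):
          j = i
          # shift larger elements to the right, counting each comparison made
          while j > 0:
              comparisons += 1
              if items[j - 1] <= items[j]:
                  break
              items[j - 1], items[j] = items[j], items[j - 1]
              j -= 1
      return items, comparisons

  print(insertion_sort_comparisons([5, 2, 4, 1, 3]))   # an exact count for this input: 9
  print(insertion_sort_comparisons([1, 2, 3, 4, 5]))   # already sorted: n - 1 = 4 comparisons
  print(insertion_sort_comparisons([5, 4, 3, 2, 1]))   # reversed: n(n - 1)/2 = 10 comparisons

Only after you've fixed what you're counting (comparisons, as a function of the input) does it even make sense to say that the worst-case count is O(n^2).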
Thanks for the information, I'll try to be more precise in the future :)