Apologies for the bad notation; also, I'm not a native speaker.
I started reading this book:
I'm currently reading the first chapter, which covers time complexity and asymptotic notation.
I'm told that for an algorithm whose running time is described by the function T(n) = 3n³ + 2n², we have a worst-case complexity of O(n³). Fine.
I'm also told that I can easily prove that T(n) ≤ 5n³ (for n ≥ 1). Fine. (I'm pretty confident in my high-school maths skills.)
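To convince myself, I wrote a quick numeric check (plain Python, just verifying the inequality over a range of n; the bound 5 and the range are my own choices, not from the book):

```python
# Check that T(n) = 3n^3 + 2n^2 <= 5n^3 for n = 1..10000.
# The algebraic reason: for n >= 1 we have 2n^2 <= 2n^3,
# so 3n^3 + 2n^2 <= 3n^3 + 2n^3 = 5n^3.

def T(n):
    return 3 * n**3 + 2 * n**2

assert all(T(n) <= 5 * n**3 for n in range(1, 10_001))
print("T(n) <= 5*n^3 holds for n = 1 .. 10000")
```

Of course a finite check is not a proof, but it matches the algebra above.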
Basically, I ended up with T(n) ≤ 3.0...01 n³ as n tends to +∞, i.e. a different value of c.
It might sound dumb that 3n³ plus something could be bounded by barely more than 3n³, but I found this using limits.
My reasoning was that, after dividing the inequality T(n) ≤ xn³ through by n³, one ends up with the statement: x ≥ 3 + 2/n.
Now n is just a positive integer. Take the smallest input, n = 1: we get x = 5, the value given by the book.
But suppose n gets bigger and bigger, eventually tending to +∞. Since 2/n then tends to 0, x tends to 3.
So T(n) ≤ xn³ with x a real number that tends to 3 from above but is never exactly 3. In fact, the inequality fails with x = 3.
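Here's a small snippet illustrating what I mean (the name `tightest_x` is just mine): for each n, the smallest constant that works is x(n) = 3 + 2/n, which decreases toward 3, yet x = 3 itself never satisfies the inequality because 2n² > 0:

```python
# For each n, the tightest constant x such that T(n) <= x*n^3
# is x(n) = 3 + 2/n (divide T(n) <= x*n^3 by n^3).
def tightest_x(n):
    return 3 + 2 / n

for n in (1, 10, 1000, 10**6):
    print(n, tightest_x(n))  # decreases toward 3 as n grows

# x = 3 itself never works: 3n^3 + 2n^2 > 3n^3 for every n >= 1.
assert all(3 * n**3 + 2 * n**2 > 3 * n**3 for n in range(1, 10_001))
```

So the constant can get arbitrarily close to 3, but any fixed c that works for all n ≥ 1 must be at least 5 (the n = 1 case).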
(In the graph, C is the point at which xn³ becomes larger than T(n).)
First of all, is my reasoning correct? Can I use this kind of reasoning when dealing with algorithm complexity? I'm fairly confident that it holds mathematically, but does it apply here?
I understand it doesn't really matter, since we end up with the same O(n³) either way, but I'm still curious.