You are correct -- O(h) and O(n) are the same here. Revisiting the example about inserting a sequence of numbers, say `[1, 2, 3, 4, 5]`, from left to right results in:
```
1
 \
  2
   \
    3
     \
      4
       \
        5
```
Now, imagine laying that flat: `1 -> 2 -> 3 -> 4 -> 5`. A linked list, right? Worst-case search in a singly linked list is linear -- O(n). So in the context of trees, denoting runtime with n or h is fine so long as everyone in the conversation understands what we're referencing :)
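To make that concrete, here's a minimal BST sketch (names like `Node` and `insert` are just illustrative, not from the original post) showing that inserting already-sorted keys produces a right-leaning chain whose height equals n:

```python
# Minimal, illustrative BST -- inserting sorted keys degenerates into a chain.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Standard unbalanced BST insert."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def height(root):
    """Number of nodes on the longest root-to-leaf path."""
    if root is None:
        return 0
    return 1 + max(height(root.left), height(root.right))

root = None
for key in [1, 2, 3, 4, 5]:
    root = insert(root, key)

print(height(root))  # 5 -- height equals n, so O(h) search really is O(n) here
```

Every key is larger than the last, so every insert goes right, and the "tree" is exactly the linked list described above.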
Yeah, self-balancing trees are such an interesting topic. If you asked me to implement one, I probably couldn't off the top of my head. The rotations are non-trivial, but it's fun to know about the amazing benefits they provide -- guaranteeing performant runtimes by keeping the height as small as possible, so you don't end up with the degenerate example above for very large data sets :)
I look forward to more! Keep up the awesome posts :)
Thanks for that explanation, it helps! Talking with you has made me even more excited to dive into this, so stay tuned -- there's a good chance I'll post about self-balancing trees soon! Thanks for your support :)
It'd be awesome! Your enthusiasm for this stuff is great, too! Looking forward to it!