Since my last post I've been working, on and off, on getting back into the flow of coding.
In computer science, Big O is a measurement used to describe the efficiency of algorithms. There are best, worst and average cases that can come about depending on the data structures, algorithms and languages used.
Common runtimes include O(N), O(log N), O(N log N), O(N^2) and O(2^N).
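To make a couple of those concrete, here's a rough sketch in Python (my own choice of language, since this post isn't tied to one) of loops that grow at those rates:

```python
def linear_scan(items, target):
    """O(N): in the worst case, look at every element once."""
    for item in items:
        if item == target:
            return True
    return False

def binary_search(sorted_items, target):
    """O(log N): halve the search space on every step (the list must already be sorted)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return True
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

def all_pairs(items):
    """O(N^2): compare every element with every other element."""
    pairs = []
    for a in items:
        for b in items:
            pairs.append((a, b))
    return pairs
```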
Runtimes are determined by the number of operations an algorithm performs and the amount of memory it uses (O(1), constant memory, being one example).
Memory usage, also known as space complexity, can affect Big O analysis since the computer has to reserve memory for the algorithm's operations.
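Here's a minimal sketch contrasting O(1) extra space with O(N) extra space (the function names are just for illustration):

```python
def sum_in_place(numbers):
    """O(1) extra space: one running total, no matter how long the list is."""
    total = 0
    for n in numbers:
        total += n
    return total

def doubled_copy(numbers):
    """O(N) extra space: builds a whole new list the same size as the input."""
    return [n * 2 for n in numbers]
```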
Best and worst case scenarios are determined by what the algorithm is trying to do and what data it is given. For example, running bubble sort over an array of nearly sorted items is O(N), since it can stop after a single pass with no swaps; in most cases, though, the items aren't sorted yet, so we're left with O(N^2), because it has to repeatedly pass through the array and check each element that isn't in place (see the sketch below).
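Here's a rough sketch of bubble sort in Python with the early exit that gives the nearly-sorted best case its O(N) runtime:

```python
def bubble_sort(items):
    """Bubble sort with an early exit.

    Best case (already or nearly sorted input): one pass, no swaps -> O(N).
    Average/worst case: keeps passing until nothing swaps -> O(N^2).
    """
    n = len(items)
    for i in range(n - 1):
        swapped = False
        # Each pass bubbles the largest remaining element to the end.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        # If nothing swapped on this pass, the list is sorted: stop early.
        if not swapped:
            break
    return items
```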
Anyway, this was my first foray into understanding Big O theory, and constructive criticism and guidance are greatly appreciated.
If there is anything that I'm missing or could elaborate on, please let me know.
Jonathan Thomas, future Software Engineer