Big O Notation is a concept that was incredibly intimidating to me during my transition into programming. I had this preconceived notion that Big O Notation involved complex math and knowing a multitude of algorithms. As it turns out, Big O Notation is not all that scary; in fact, it makes a lot of sense once you do a little bit of research.
This blog post is meant as a high level overview for someone who has basic coding experience, but no exposure to Big O Notation. It serves to explain the conceptual basics of Big O in an abstract way.
What's the Big (O) Deal?
Big O Notation is a descriptive tool that developers use to communicate how well their program handles an increase in input. It can be a measure of how fast a program runs ("time complexity") or how much storage it takes up ("space complexity").
Essentially, if a program can handle an increasingly large amount of input with little or no increase in runtime, it will rate well in Big O terms.
Measures of Complexity
Suppose we have a function that returns the string "Hello World". This function takes user input in the form of an array. Whether the array it receives is empty or thousands of numbers long, the program will still have the same runtime. This is called "constant time" and is considered ideal.
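Here is a minimal sketch of what that constant-time function might look like. The post doesn't specify a language, so this example assumes Python, and the function name `greet` is just illustrative:

```python
def greet(items):
    # Constant time, O(1): the work done does not depend on len(items).
    return "Hello World"

print(greet([]))                  # "Hello World"
print(greet(list(range(10000))))  # still a single step, same runtime
```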
What about a function that performs a task for each item in an array? Suppose our function is altered so that, rather than returning "Hello World" once regardless of the input, it returns "Hello World" once for each item in the array. This type of function has what is called "linear growth". This means that there is a direct correlation between the size of the input and the runtime of our program.
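A sketch of that linear version, again assuming Python and an illustrative function name:

```python
def greet_each(items):
    # Linear time, O(n): one "Hello World" per element, so the work grows
    # in direct proportion to the size of the input.
    return ["Hello World" for _ in items]

print(greet_each([1, 2, 3]))  # ['Hello World', 'Hello World', 'Hello World']
```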
Now let's say that instead of returning "Hello World" once for every item in our array, our program has been rewritten to return "Hello World" as many times as the array is long for each element in the array. In this example an array of size 1 would return "Hello World" once, an array of size 2 would return "Hello World" four times, an array of size 3 would return "Hello World" nine times, and so on. This is called "quadratic growth", because the runtime grows with the square of the input size. You can imagine how increasing the input can drastically slow down our program.
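One way to picture the quadratic version is a nested loop over the same array. This is a sketch under the same assumptions as above (Python, hypothetical function name):

```python
def greet_squared(items):
    # Quadratic time, O(n^2): for each element we loop over the whole array
    # again, so an input of size n produces n * n greetings.
    greetings = []
    for _ in items:
        for _ in items:
            greetings.append("Hello World")
    return greetings

print(len(greet_squared([1, 2])))     # 4
print(len(greet_squared([1, 2, 3])))  # 9
```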
Why Does It Matter?
Big O Notation is important because we want our code to be "scalable". Scalability is a program's ability to remain efficient when exposed to a large amount of input or data.
So... does this mean there is one algorithm, used by every program, that is the perfect Big O solution and will make our code efficient and scalable?
In short...no.
There is no one-size-fits-all algorithm for writing code. The type of code you write depends on the problem you are solving. Different problems have different requirements and therefore call for different methods.
It is important that, as a developer, you be cognizant of your program's complexity and cut out unnecessary bulk so that the code will run as efficiently as possible. Big O Notation is one of the tools you can use to express the complexity of your code.
Resources
- http://www.corejavainterviewquestions.com/idiots-guide-big-o/
- https://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/