Suppose you and a friend have each shuffled a deck of cards. Another friend who saw you two challenges both of you to sort your decks by order of rank, declaring that whoever does it fastest wins a treat from the loser.
So, both of you start going through your cards one by one. You set one card aside in your right hand and check whether the card on top of the pile in your left hand is smaller or greater than it. If it is smaller, you place it below that card in your right hand; if it is larger, you place it above. Once you have three cards in your right hand, you again compare the top card of the pile against all three. In other words, every time you sort a new card, you have to compare it against each card you have already sorted.
Your friend, on the other hand, being smarter than you, simply lays all the cards out on the table and starts arranging them in order without comparing each card against every other. After one minute, when you are nowhere near halfway through sorting, your friend has completed the task and won the challenge, leaving you with a still-unsorted deck of cards and a treat to give.
Big O notation, put simply, describes the cost of an algorithm designed to achieve a certain task. At its foundation, it captures how the time and memory consumed by a piece of code, most often the body of a loop, grow as the size of the input grows. Big O notation originated with the German mathematicians Paul Bachmann and Edmund Landau, and it was made familiar to the world of computer science by the famous Donald Ervin Knuth, professor emeritus at Stanford University. It is written with the symbol O, as in O(n), where n stands for the size of the input and the expression inside the parentheses describes how the algorithm's cost grows with it.
Now, back to the card-deck sorting challenge: the strategy you chose to arrange the cards can be said to have a Big O notation of n², or O(n²). In programming terms, n² means that the time and memory consumed by a CPU grow with the square of the number of inputs. In other words, assuming you take a constant amount of time to check and place each card, the time it takes to completely sort the deck grows as the square of the number of cards in it. Your strategy is essentially insertion sort.
let unsortedDeck = [4, 10, 1, 7, 9, 3, 8, 2, 12, 11, 13, 5, 6];
let n = unsortedDeck.length;

function yourSort(unsortedDeck, n) {
  let i, currentCard, j;
  for (i = 1; i < n; i++) {
    currentCard = unsortedDeck[i];
    j = i - 1; // j points to the card just before currentCard
    /* Move the cards in unsortedDeck[0..i-1] that are greater
       than currentCard one position ahead of where they are */
    while (j >= 0 && unsortedDeck[j] > currentCard) {
      // the previous card is greater than currentCard,
      // so shift it one position ahead
      unsortedDeck[j + 1] = unsortedDeck[j];
      j = j - 1;
    }
    unsortedDeck[j + 1] = currentCard;
  }
  // unsortedDeck is now sorted
}

// function to print all cards
function printArray(unsortedDeck, n) {
  let line = "";
  for (let i = 0; i < n; i++) {
    line += unsortedDeck[i] + " ";
  }
  console.log(line.trim());
}

// run functions
yourSort(unsortedDeck, n);
printArray(unsortedDeck, n);
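To see the quadratic cost concretely, here is a small sketch of my own (the helper `countComparisons` is not part of the original example) that counts how many card comparisons the insertion-sort strategy above makes on a worst-case, reverse-ordered deck:

```javascript
// Count the comparisons the insertion-sort strategy makes.
// (countComparisons is a hypothetical helper added for illustration.)
function countComparisons(deck) {
  const cards = [...deck]; // copy so the input deck is untouched
  let comparisons = 0;
  for (let i = 1; i < cards.length; i++) {
    const currentCard = cards[i];
    let j = i - 1;
    while (j >= 0 && cards[j] > currentCard) {
      comparisons++; // a comparison that triggered a shift
      cards[j + 1] = cards[j];
      j--;
    }
    if (j >= 0) comparisons++; // the comparison that stopped the shifting
    cards[j + 1] = currentCard;
  }
  return comparisons;
}

const reversed13 = Array.from({ length: 13 }, (_, k) => 13 - k);
const reversed26 = Array.from({ length: 26 }, (_, k) => 26 - k);
console.log(countComparisons(reversed13)); // 78  = 13 * 12 / 2
console.log(countComparisons(reversed26)); // 325 = 26 * 25 / 2
```

Doubling the deck from 13 to 26 cards roughly quadruples the comparisons, which is the signature of O(n²) growth.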
The strategy your friend chose, on the other hand, has a notation of n, or O(n). Because each card is placed directly without being compared against the others, the time needed grows only linearly with the size of the deck, which is far less than yours as the deck gets bigger.
let unsortedDeck = [4, 10, 1, 7, 9, 3, 8, 2, 12, 11, 13, 5, 6];
let n = unsortedDeck.length;
let sortedDeck = [];

function yourFriendsSort(unsortedDeck, n) {
  /* Your friend can see all the cards at once after laying
     them out, so each card goes straight to its final spot
     in a single pass: a card with value v belongs at index
     v - 1, with no comparisons against other cards needed */
  for (let i = 0; i < n; i++) {
    sortedDeck[unsortedDeck[i] - 1] = unsortedDeck[i];
  }
}

// function to print all cards
function printArray(sortedDeck, n) {
  let line = "";
  for (let i = 0; i < n; i++) {
    line += sortedDeck[i] + " ";
  }
  console.log(line.trim());
}

// run functions
yourFriendsSort(unsortedDeck, n);
printArray(sortedDeck, n);
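One way to model your friend's step count (a sketch of my own; `countPlacements` is a hypothetical helper, not part of the article's example) is to count placements: each card with value v drops straight into slot v - 1, so sorting takes exactly one placement per card.

```javascript
// Model your friend's strategy: one direct placement per card,
// with no comparisons. (countPlacements is added for illustration.)
function countPlacements(deck) {
  const sorted = [];
  let placements = 0;
  for (const card of deck) {
    sorted[card - 1] = card; // card with value v belongs at index v - 1
    placements++;
  }
  return placements;
}

console.log(countPlacements([4, 10, 1, 7, 9, 3, 8, 2, 12, 11, 13, 5, 6])); // 13
// A 26-card deck would take 26 placements: double the cards, double
// the work, instead of quadruple as with the comparison-based approach.
```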
But if there were only three cards in the entire deck (a near best-case scenario), it could have been a tight contest, since you would have had a similar chance of finishing before your friend.
Another thing to remember is that programmers sometimes declare and assign a variable inside a loop when it could have been declared once outside the loop and only assigned inside it. That causes the variable to be declared a hundred or even a thousand times, once for every run of the loop. While it rarely costs a measurable time delay, it is widely considered a bad practice and should be avoided.
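As a small sketch of that point (the variable names here are illustrative, and modern JavaScript engines optimize both forms, so this is mostly a matter of discipline and readability), compare declaring a temporary inside the loop with declaring it once outside:

```javascript
const deck = [4, 10, 1, 7, 9, 3, 8, 2];

// Declared inside the loop: `doubled` is re-declared on every iteration.
const resultsInside = [];
for (let i = 0; i < deck.length; i++) {
  let doubled = deck[i] * 2;
  resultsInside.push(doubled);
}

// Declared once outside, assigned inside: the declaration happens only once.
const resultsOutside = [];
let doubled;
for (let i = 0; i < deck.length; i++) {
  doubled = deck[i] * 2;
  resultsOutside.push(doubled);
}

console.log(resultsInside.join(",") === resultsOutside.join(",")); // true
```

Both loops produce the same output; the difference is purely in how many times the variable is declared.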
Big O notation lets programmers estimate how slow or fast an algorithm will be in its best-case and worst-case scenarios. As enterprise-level software scales up, paying attention to Big O notation helps keep response times steady for millions of users rather than letting the software slow down as the number of users increases.
Refer to the Big-O Cheat Sheet and this Medium article to learn the Big O notation of common data structure operations, so you can pick the suitable operations and write the most efficient algorithms.