Avinash Vagh

Data Structures and Algorithms with JavaScript

PART-1

Introduction to Big O Notation

Getting started with Big O Notation using Linear Search and Binary Search examples

Introduction

Big O Notation is a way to measure an algorithm’s efficiency. It measures the time it takes to run your function as the input grows, or in other words, how well the function scales.


~ Big O Notation is a way to formalize fuzzy counting
~ It allows us to talk formally about how the runtime of an algorithm grows as the input grows
~ We say that an algorithm is O(f(n)) if the number of simple operations the computer has to do is eventually less than a constant times f(n), as n increases

  • f(n) could be linear (f(n)=n)

  • f(n) could be quadratic (f(n)=n²)

  • f(n) could be constant (f(n)=1)

  • f(n) could be something entirely different!



There are two parts to measuring efficiency — time complexity and space complexity. Time complexity is a measure of how long the function takes to run in terms of its computational steps. Space complexity has to do with the amount of memory used by the function. This blog will illustrate time complexity with two search algorithms.
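
To make this concrete, here is a small JavaScript sketch of my own (not from the original post) comparing two ways to sum the numbers from 1 to n: the loop version takes more steps as n grows, while the formula version always takes the same handful of operations.

// O(n): the number of operations grows in step with n
function addUpToLoop(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

// O(1): always the same few operations, no matter how big n is
function addUpToFormula(n) {
  return (n * (n + 1)) / 2;
}

console.log(addUpToLoop(100));    // 5050
console.log(addUpToFormula(100)); // 5050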


~ Simplifying Big O Expressions
~ Constants Don't Matter

  • O(2n) → O(n)
  • O(500) → O(1)
  • O(13n²) → O(n²)

~ Smaller Terms Don't Matter

  • O(n + 10) → O(n)
  • O(1000n + 50) → O(n)
  • O(n² + 5n + 8) → O(n²)
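
As a quick illustration (my own sketch, not from the original post): the function below loops over the array twice, so it does roughly 2n operations, but we still call it O(n) because the constant factor is dropped.

// Roughly 2n operations, but the Big O is still O(n)
function minAndMax(numbers) {
  let min = Infinity;
  for (const value of numbers) {   // first pass: n operations
    if (value < min) min = value;
  }
  let max = -Infinity;
  for (const value of numbers) {   // second pass: another n operations
    if (value > max) max = value;
  }
  return { min, max };
}

console.log(minAndMax([3, 8, 1, 5])); // { min: 1, max: 8 }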



Big O is sometimes referred to as the algorithm’s upper bound, meaning that it deals with the worst-case scenario. The best-case scenario doesn’t really tell us anything — it would simply be finding our item on the first pass. We use the worst case to remove uncertainty — the algorithm will never perform worse than we expect.

Space Complexity

So far, we've been focusing on time complexity: how can we analyze the runtime of an algorithm as the size of the input increases?
Sometimes you'll hear the term auxiliary space complexity to refer to the space required by the algorithm itself, not including the space taken up by the inputs. Technically, that's what we'll be talking about here.

Space Complexity in JS

  • Most primitives (booleans, numbers, undefined, null) are constant space
  • Strings require O(n) space (where n is the string length)
  • Reference types are generally O(n), where n is the length (for arrays) or the number of keys (for objects)
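
A small JavaScript sketch of my own to illustrate the difference: the first function only ever holds a couple of numbers, while the second builds a new array that grows with the input.

// O(1) space: only two variables, regardless of the array's length
function sum(numbers) {
  let total = 0;
  for (let i = 0; i < numbers.length; i++) {
    total += numbers[i];
  }
  return total;
}

// O(n) space: the new array grows in proportion to the input
function double(numbers) {
  const doubled = [];
  for (let i = 0; i < numbers.length; i++) {
    doubled.push(numbers[i] * 2);
  }
  return doubled;
}
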
Logarithmic Complexity

  • Certain searching algorithms have logarithmic time complexity
  • Efficient sorting algorithms involve logarithms
  • Recursion sometimes involves logarithmic space complexity

Summary

  • To analyze the performance of an algorithm, we use Big O Notation
  • Big O Notation can give us a high-level understanding of the time or space complexity of an algorithm
  • Big O Notation doesn't care about precision, only about general trends (linear? quadratic? constant?)
  • The time or space complexity (as measured by Big O) depends only on the algorithm, not the hardware used to run the algorithm

Linear Search Example

Let’s look at a simple example.
We will search for the number eight out of a range from 1–8.
Our first strategy is to start with the first number of the array and move up by one until we find our target number. Our algorithm steps would look something like this.

  1. Start at the beginning of the list
  2. Compare the current value to our target; if it matches, we’re done
  3. If not, move to the next value and compare again
  4. If we reach the end of the list without a match, the target isn’t there

Linear Search


In round 1, we choose number one; that’s not correct, so we eliminate number one as an option and move to round 2. Round 2 is also not correct, and we continue all the way until we select number eight.


In our worst-case scenario, this is not very efficient. We have to check every single number in the list until we get to our answer. This method is called Linear Search. The Big O notation for Linear Search is O(N). The complexity is directly related to the size of the inputs — the algorithm takes an additional step for each additional data element.


def linear_search(arr, x):  # arr: input array, x: target value
    for i in range(len(arr)):
        if arr[i] == x:
            return i  # found the target, return its index
    return -1  # target is not in the array
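
Since this series is about JavaScript, here is the same idea as a minimal JS sketch (my own translation of the Python above):

// Linear search: check each element in turn, O(N)
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) {
      return i; // found the target, return its index
    }
  }
  return -1; // target is not in the array
}

console.log(linearSearch([1, 2, 3, 4, 5, 6, 7, 8], 8)); // 7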

Binary Search Example

Let’s try a different strategy. The critical part of this strategy is that the list must be in order.


This time we start in the middle of the list. We check to see whether we have chosen the target number. If not, we check whether the number is smaller or larger than our choice. In either case, we eliminate half of the list that doesn’t include our target. Then we select a number in the middle of the remaining values.

  1. Find the middle of the list — the midpoint
  2. Compare the midpoint to the target
  3. If our value and the target match we stop — we’ve found our target
  4. If our value is smaller than the target, we make a new list ranging from the midpoint plus one to the largest value.
  5. If our value is larger than the target, we make a new list ranging from the smallest value to the midpoint minus one.
  6. Repeat until we find the target, or until only one element remains and it does not match the target.

Binary Search


The midpoint for our first choice is four. The target is greater than four, so we exclude values four and below and create a new list from five to eight. We pick a midpoint value of six and choose again. We continue until only number eight remains.


This strategy is called Binary Search. It is efficient because it eliminates half of the list in each pass. If we were to double our search and look for number 16 in a range of 1–16, it would take us only one more round. Amazingly, if we were to pick a number between one and a million, the worst case would be 20 guesses.


When we get to large numbers like this, we can clearly see how much more efficient this method is than using linear search: 20 guesses vs. 1,000,000 guesses.


We can model this mathematically. The number eight is equivalent to 2 x 2 x 2. We can also express this equation using exponents: two to the power of three, or 2³.


The inverse of an exponent is a logarithm. Log base 2 of 8 asks how many times you have to multiply two by itself to get eight. As illustrated above, 2 x 2 x 2 = 8, so log₂ 8 = 3.
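
You can check this in JavaScript, and the same math explains the earlier claim about 20 guesses for a range of one to a million (a quick sketch of my own):

console.log(Math.log2(8));                  // 3
console.log(Math.ceil(Math.log2(1000000))); // 20, worst-case guesses for 1..1,000,000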


In general, the worst-case scenario for Binary Search is about log₂ n + 1 steps. The Big O notation for Binary Search is O(log N). In contrast to O(N), which takes an additional step for each data element, O(log N) means that the algorithm takes an additional step each time the data doubles.


def binary_search(arr, target):
    left = 0
    right = len(arr) - 1

    while left <= right:
        mid_point = (left + right) // 2

        if target < arr[mid_point]:
            right = mid_point - 1   # target is in the left half
        elif target > arr[mid_point]:
            left = mid_point + 1    # target is in the right half
        else:
            return mid_point        # found the target, return its index

    return -1  # target is not in the array
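
And again, a JavaScript version of the same algorithm (my own sketch):

// Binary search: halve the search range on every pass, O(log N)
// Requires the array to be sorted
function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;

  while (left <= right) {
    const mid = Math.floor((left + right) / 2);

    if (arr[mid] === target) {
      return mid;      // found the target, return its index
    } else if (arr[mid] < target) {
      left = mid + 1;  // target is in the right half
    } else {
      right = mid - 1; // target is in the left half
    }
  }
  return -1; // target is not in the array
}

console.log(binarySearch([1, 2, 3, 4, 5, 6, 7, 8], 8)); // 7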



Note: Even though Binary Search is far more efficient than Linear Search, you wouldn’t always choose it in every example. The reason for this is the very first qualification. Binary Search requires that the list is in order. Sorting a list introduces its own complexity — so in some cases, it may be more efficient to use Linear Search rather than sorting the list first.

Writing Big O Notation

When we write Big O notation, we look for the fastest-growing term as the input gets larger and larger. We can simplify the equation by dropping constants and any non-dominant terms. For example, O(2N) becomes O(N), and O(N² + N + 1000) becomes O(N²).


Binary Search is O(log N), which is less complex than Linear Search. There are many more complex algorithms. A common example of a quadratic algorithm, or O(N²), is a nested for loop. In a nested loop, we iterate through the entire data set in an outer loop. Then, for each element, we iterate through the data again in an inner loop. That is N x N steps, or N².
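
For example (a hypothetical sketch of my own, not from the original post), printing every pair of elements in an array uses a nested loop and is O(N²):

// O(N²): for each of the N elements, we loop over all N elements again
function printAllPairs(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      console.log(arr[i], arr[j]);
    }
  }
}

printAllPairs([1, 2, 3]); // logs 9 pairs (3 x 3)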

Complexity


Here’s how these Big O runtimes look on a chart. The examples in this blog are two of the least complex notations.
Big O Chart

Summary

There are often many ways to write code to get to a solution. Understanding Big O notation is important when writing algorithms. It helps you determine whether your algorithm is getting faster or slower, and it lets you compare different approaches and choose the most efficient one.

Additional Resources

Treehouse, via freeCodeCamp, has a great YouTube course that takes you from beginner to understanding Big O notation. Warning: it is over five hours long. Treehouse Tutorial

Big O Cheatsheet

Note: I've taken reference from a blog on the Internet.
Credit: Andrew Jamieson + my own research

Keep going; it will make sense.
Thank you for reading :))


Subscribe to my YouTube channel, where I share remote job-hunting tips and opportunities → https://youtube.com/@AvinashVagh

Get Globally Full Remote Job Hiring Company’s details in your inbox | Remote Profile Newsletter → https://remoteprofile.beehiiv.com/subscribe


If you like my blog, you can connect with me on LinkedIn, follow me on Twitter, and check out my Hashnode blog.

Top comments (2)

Safety Vest 2789

Someone pointed me to this as I was looking for material to start learning Algorithms. Thanks for making this article!

Avinash Vagh

I'm happy to know that this article helped you. I appreciate your comment. My next suggestion is to take Colt Steele's Udemy course on DSA with JavaScript. It will be a good starting point for understanding the basics. You can also check out this YouTube playlist, which I personally follow: youtube.com/playlist?list=PLNdhK4F...