## DEV Community

steve kimoi


# 102(b): DEEP DIVE INTO DATA STRUCTURES AND ALGORITHMS:

Deep dive into Algorithms:
In my first article, I gave a very brief introduction to algorithms: their definition, characteristics, types, and importance. In this article, I'll give a more in-depth explanation.

We will cover:

1. Time complexity
2. Space complexity
3. Asymptotic notations

An algorithm is said to be efficient if it takes less time and consumes less memory. The performance of an algorithm is measured by its time complexity and space complexity. Knowing both enables us to make a program behave in the required optimal conditions, which makes us efficient programmers.

Time complexity:
Time complexity of a program is the total time required by a program to run to completion. It does not measure the total execution time itself; rather, it describes how the execution time varies (increases or decreases) as the number of operations in an algorithm changes.

Time complexity of an algorithm is usually expressed using the big-O notation, an asymptotic notation used to represent time complexity.

Time complexity is estimated by counting the number of steps performed by an algorithm to finish its execution.

A basic example could be:

We are told to find the square of any number, n. One solution is to run a loop n times, adding the number n to itself n times:

```
result = 0
for i = 1 to n:
    result = result + n
return result
```

Or we can simply use the mathematical operator * to find the square:

```
return n * n
```

For the first solution, the loop runs n times, so the time complexity is at least n, and as the value of n increases, the time taken increases too.

For the second solution, the time taken will be independent of the value of n. Thus, the time complexity will be constant.
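To make the step counts concrete, here is a small Python sketch (the function names and step counters are my own additions for illustration) that counts the operations each approach performs:

```python
def square_by_addition(n):
    """Square n by adding it to itself n times: the step count grows with n."""
    result, steps = 0, 0
    for _ in range(n):
        result = result + n  # one addition per loop iteration
        steps += 1
    return result, steps

def square_direct(n):
    """Square n with a single multiplication: the step count stays constant."""
    return n * n, 1

print(square_by_addition(5))  # (25, 5)
print(square_direct(5))       # (25, 1)
```

Doubling n doubles the step count of the first approach, while the second always takes one step, which is exactly the difference between linear and constant time.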

Calculating time complexity:
Disclaimer: I have used the same examples as the Studytonight site.

The most common way of expressing time complexity is the big-O notation. This method drops all the constants, so that the running time can be estimated in terms of n as n approaches infinity.

For example:

```
statement
```

For the above statement, the running time will not change in relation to n. This will make the time complexity constant.

```
for x in range(n):
    statement
```

The time complexity for the above algorithm will be linear. The running time of the loop will be directly proportional to n. When n increases, so does the running time.

```
for x in range(n):
    for j in range(n):
        statement
```

The time complexity for the above code will be quadratic. This is because the running time of the above loops is proportional to the square of n.
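These growth rates can be verified by counting the loop iterations directly. A quick sketch (the helper names are mine):

```python
def count_linear(n):
    """Count how many times the statement runs in a single loop."""
    steps = 0
    for x in range(n):
        steps += 1  # the statement runs once per value of x
    return steps

def count_quadratic(n):
    """Count how many times the statement runs in two nested loops."""
    steps = 0
    for x in range(n):
        for j in range(n):
            steps += 1  # the statement runs once per (x, j) pair
    return steps

print(count_linear(10))     # 10: grows linearly with n
print(count_quadratic(10))  # 100: grows with the square of n
```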

Types of notation for time complexity:

The various notations used for time complexity are:

1. Big O notation (oh)
2. Big Ω notation (omega)
3. Big Θ notation (theta)

1. Big O: represents the worst case of an algorithm's time complexity. It is the set of functions that grow slower than, or at the same rate as, the expression. It indicates the maximum time required by an algorithm over all input values.
2. Big Omega: represents the best case of an algorithm's time complexity. It is the set of functions that grow faster than, or at the same rate as, the expression. It indicates the minimum time required by an algorithm over all input values.
3. Big Theta: represents the average case of an algorithm's time and space complexity. It consists of all functions that lie in both O and Omega.

Now let’s understand what space complexity is:

Space complexity:
Space complexity is the amount of memory required by the algorithm during the time of its execution. It includes both auxiliary space and space used by the input. Auxiliary space is the extra/temporary space used by an algorithm.

`Space complexity = auxiliary space + input space`

The following components in an algorithm generally require space:
1. Data space: the space required to store all the constants and variables (including temporary variables).
2. Instruction space: the space required to store the executable version of the program.
3. Environment space: the space used to store the environment information needed to resume a suspended function, e.g. when a function x calls a function y, the variables of function x are stored on the system stack temporarily while function y executes.
When we are calculating the space complexity, we usually consider data space only and neglect instruction space and environment space.

Calculating the space complexity:
In order to calculate the space complexity, we need to know the amount of memory used by the different types of data variables. Although this may differ across operating systems, the method of calculation remains the same.

Let's compute the space complexity using a few examples:

Disclaimer: the examples below are copied from Studytonight.

Example 1:

```
def myFunction(x, y, z):
    a = x + y + z
    return a
```

In the example above, x, y, z, and a are all integers, each taking 4 bytes, so the variables use ( 4 + 4 + 4 + 4 ) = 16 bytes; an additional 4 bytes is used for the return value, giving 20 bytes in total.

This is a Constant Space Complexity, since the memory required is fixed.

Example 2:

```
def mySum(arr, n):
    x = 0
    for i in range(n):
        x = x + arr[i]
    return x
```

In the above example, 4*n bytes are required for arr[], and 4 bytes each for x, n, and i.

Thus, the total memory required is (4n + 12) bytes. Since the memory grows linearly with n, this is called Linear Space Complexity.
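We can observe this linear growth in Python itself with `sys.getsizeof`, which reports the memory used by a container. The exact byte counts are a CPython implementation detail (Python ints are larger than 4 bytes, for instance), so treat the numbers as illustrative only:

```python
import sys

small = list(range(10))    # a 10-element list
large = list(range(1000))  # a 1000-element list

# The list's memory footprint grows with the number of elements it holds.
print(sys.getsizeof(small))
print(sys.getsizeof(large))
```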

Now that we have a good understanding of time and space complexity, we’re going to look at the standard ways of expressing them.

Asymptotic notations:
These are standard notations that are used to express the time and space required by an algorithm. They are needed because it is impossible to give the exact amount of time and space an algorithm will require.
There are three main types of Asymptotic notations:

1. Big O notation

2. Theta notation
3. Omega notation

Big O notation:
This usually represents the worst-case scenario of an algorithm. It is also known as the upper bound of the running time of an algorithm. It represents the complexity of your code using algebraic terms.

It tells us that a function will never exceed a specified time for any value of input n.

Example: let us consider a linear search algorithm where we pass through elements inside an array, one after the other, to search for a particular number. Our worst-case scenario will be if the element is located at the end of an array. This leads to a time complexity of n, where n is the total number of the elements. When we use Big O notation, we say that the time complexity will be O(n). This means that the time complexity will never exceed n. This defines the upper bound, which means that it can be less than or equal to n.
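The linear search described above can be sketched in Python (the comparison counter is my own addition, included to make the best and worst cases visible):

```python
def linear_search(arr, target):
    """Return (index, comparisons made); (-1, len(arr)) if target is absent."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

arr = [3, 7, 1, 9, 4]
print(linear_search(arr, 3))  # (0, 1): best case, element at the front
print(linear_search(arr, 4))  # (4, 5): worst case, element at the end
```

The worst case makes n comparisons, matching the O(n) upper bound, while the best case makes just one, matching the Ω(1) lower bound.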

Omega notation:
This is used to define the best-case scenario. It defines the lower bound. It indicates the minimum time required for an algorithm.

Theta notation:
This represents the average case: the range within which the actual execution time will fall, bounded above by Big O and below by Omega.