Though this is going to be all about algorithms, it starts with PROBLEMS.
Yeah! We have problems just like in mathematics, physics and at various stages of life!
And what do we do? We try to solve them. In general, any normal person like you follows this approach:
- Understand what the problem actually is. (Why do I feel like a fish? What is the integral of 1/sinx?)
- Then you identify some reasons and hurdles. If they are vague, you keep trying to find them until you reach some clear reasons and causes. (I don't know what integration is! I don't like integration. I don't know the formulas.)
- Then you try to remove them with your superhuman strategies. (Let's google the formula!)
- Then you apply these strategies and get an outcome in return. (I am getting the wrong answer :( )
- The outcome may or may not be successful, depending on how well you performed the above three crucial steps. If not, you are back to step 1. (Oh shit! It was a minus sign!)
In all these steps you are trying to build a SOLUTION. A solution to solve a problem. Simple, no?
In computer science, while solving computer problems, we call this an ALGORITHM. Yeah, it's just like calling a brinjal an aubergine!!
So, algorithms (or solutions) for computer science problems, for example:
- sorting the records of 1 billion people by their first name
- joining two pieces of text
- how many zeroes are in 1000000000000000000?
- Which route is less costly: Lucknow -> Noida -> Kashmir or Lucknow -> Haryana -> Kashmir? ...and so on...
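As a warm-up, the zero-counting problem from the list above has a one-line solution. Here is a quick sketch in Python (the approach, counting occurrences of the digit '0' in the decimal representation, is just one of several possible):

```python
def count_zeros(n):
    # Count occurrences of the digit '0' in the decimal form of n.
    return str(n).count("0")

print(count_zeros(1000000000000000000))  # → 18
```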
Clearly, there are many ways of solving a problem. But we don't want our computer program to run for years and years, and that is why we want an algorithm (aka solution) that runs in the minimum possible time.
Okay, so how do I know which solution is better than another?
Simple! Just measure the clock time they take by running them!
But execution times are specific to a particular machine (an i5 or an i3).
Then just count which has the larger number of statements. More statements, more time!!
No, the number of statements depends on the programming language and the programmer's style.
Express the running time as a function of the input size.
Actually, we are interested in knowing what happens once we increase the input size. The solution works for 100 elements, for 1,000 elements, but what about 1 trillion? Is the execution time the same?
The input size is the number of elements in the input and, depending on the problem, it could be the size of an array, a polynomial degree, the number of elements in a matrix, the number of bits in a binary representation, or the number of vertices and edges in a graph.
We are checking effectiveness based on how a change in the input size affects the running time of the algorithm.
Algorithms that stay fast even for large input sizes are what we are looking for!!
And we glue these two things together with functions.
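To make this concrete, here is a rough sketch in Python that times a simple linear-scan algorithm at growing input sizes. The absolute numbers are machine-specific (that was exactly the objection above), but the trend as n grows is what we care about:

```python
import timeit

def linear_sum(n):
    # A simple algorithm whose work grows linearly with the input size n.
    total = 0
    for i in range(n):
        total += i
    return total

# Time the same algorithm at increasing input sizes.
for n in (10_000, 100_000, 1_000_000):
    t = timeit.timeit(lambda: linear_sum(n), number=10)
    print(f"n = {n:>9}: {t:.4f} s")
```

Run it on an i3 and on an i5 and the seconds will differ, but on both machines the time grows roughly tenfold each time n grows tenfold.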
Algorithm A running time f(n): 2n+n-9
Algorithm B running time f(n): 7logn+2n
Algorithm C running time f(n): 3n+6+n^2
Now how can we compare 2n+n-9 with 7logn+2n?
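One quick way to build intuition is to just evaluate the three functions at growing n. A small sketch in Python (I am assuming log base 2 here, which the original functions leave unspecified):

```python
import math

# The three hypothetical running-time functions from above.
def f_a(n): return 2*n + n - 9          # Algorithm A
def f_b(n): return 7*math.log2(n) + 2*n  # Algorithm B (assuming log base 2)
def f_c(n): return 3*n + 6 + n**2        # Algorithm C

for n in (10, 1_000, 1_000_000):
    print(f"n={n:>9}  A={f_a(n):>15.0f}  B={f_b(n):>15.0f}  C={f_c(n):>15.0f}")
```

For small n the three look comparable, but as n grows the n^2 term in C dwarfs everything else, which is exactly the idea formalized next.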
rate of growth
Suppose you are planning to buy a bicycle and a house this Christmas. When your friend asks what you are buying, you will say a house, because the cost of the house is far more than that of the bicycle.
total cost = house + bicycle
total cost = house (approximately)
Okay... let's take a serious case. Mathematically, we know:
f(n) = n^2+2n+6 ~ n^2 (approximately)
f(n) = 4n+2^n ~ 2^n (approximately)
The other terms (2n and 6, or 4n) do not affect f(n) as much as n^2 or 2^n when the value of n gets really, really high. Not affecting in the sense that, for larger values of n, n^2 becomes so much bigger than 2n and 6 that changes in their values have hardly any impact compared to n^2. So approximately we can say that f(n) is equivalent to n^2 for larger values of n.
f(n) ~ n^2, when n is very large
The rate at which the running time increases as a function of input is called the rate of growth.
If n^4, 2n^2 and 100n are the constituent terms of a function, then it approximates to n^4, since n^4 has the highest rate of growth.
n^4+2n^2+100n ~ n^4
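You can verify this approximation numerically. A small sketch in Python: the ratio f(n)/n^4 approaches 1 as n grows, which is what "f(n) ~ n^4" means informally:

```python
def f(n):
    return n**4 + 2*n**2 + 100*n

# The n^4 term dominates: the ratio f(n) / n^4 tends to 1 as n grows.
for n in (10, 100, 10_000):
    print(f"n = {n:>6}: f(n)/n^4 = {f(n) / n**4:.8f}")
```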
In practice, you will hardly ever get data of just 50 or 100 elements. Think about Gmail users, Amazon or Netflix subscribers, YouTube, and high-performance computing work like simulations and weather forecasting, colleges, offices and so on...
And we need efficient algorithms that are able to process input data of such large sizes.
In the next part, I will take this discussion towards Asymptotic Analysis, the notation we usually see (the Big Oh stuff).
Level up every day