# Drawbacks of Recursive Algorithm

Dishant Varshney · 6 min read

## Recursion

Even if you're a newbie in the programming world, you will have come across a very famous technique called *recursion*. It's a great approach, especially when you want to solve a big problem by breaking it into smaller yet similar sub-problems.

Recursion is used almost everywhere, such as in Quick Sort and Merge Sort. It's also the most obvious approach to problems like finding the nth number in the Fibonacci series or the factorial of some integer. A recursive function calls itself again and again, reducing the problem at each call, until a base case is met. You need only two steps to implement it:

1. Specify the base case to stop the function from *calling* itself
2. Call the function again but with a smaller yet similar problem

Let's clarify this with an example: suppose you want to calculate the factorial of 5.

The factorial of any positive integer is defined as that integer multiplied by the factorial of the previous integer, i.e.,

Factorial of 5 = 5! = 5 x 4!

4! = 4 x 3!

3! = 3 x 2!

2! = 2 x 1!

1! = 1 x 0!

And 0! = 1

(Why 0! = 1?)

Here, our base case is when the integer is 0 or 1. The algorithm would be like:

```
1. def factorial(x):
2.     if x <= 1:
3.         return 1                   # Step 1: base case
4.     else:
5.         return x * factorial(x-1)  # Step 2: recursively calling
                                      # 'factorial' with smaller argument ~> x-1
6. print(factorial(5))
```

or

```
def factorial(x):
    return 1 if x <= 1 else x * factorial(x-1)
```

If you run this function in Python (or the equivalent in any programming language you like), it outputs 120.

It's simple and elegant, but it has some limitations. One of them comes from your machine's limited memory: the problem lies in the *recursive call*. The second is its execution speed. Let's dig in one by one, shall we?

## Stack

Whenever a function (let's say A) calls another function (B), function A is stored in a region of your machine's memory so it can resume after function B finishes. This region is called the *call stack*. It's one of the data structures that ensures programs run smoothly and without error.

Suppose function B calls another function C before its execution is completed; then function B goes on top of function A in the stack. Similarly, if C calls D, then C goes on top of B.

After the execution of D, the program looks at the stack. If the stack is empty, the program finishes and returns the result to you. If the stack is not empty, it resumes the topmost function; once that function finishes, it is popped off the stack (and garbage-collected), and execution moves to the next topmost function. This continues until the stack becomes empty.
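The push-and-pop behaviour above can be sketched with a plain Python list standing in for the call stack (a toy illustration, not how the interpreter actually stores frames):

```python
# A toy model of the call stack: a list where the last element is the top.
stack = []

stack.append("A")   # A starts running, then calls B
stack.append("B")   # B calls C before finishing
stack.append("C")   # C calls D
stack.append("D")   # D now runs on top of everything else

finished = stack.pop()   # D finishes first and is removed
print(finished)          # D
print(stack[-1])         # C resumes next, since it is now the topmost
```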

Let's take the example of our factorial function. At line 6, we call `factorial(5)`, and the function `print()` goes on the stack (which was empty before `print()` came). Now, inside the function, the argument (x) is compared at line 2, which evaluates to `False` (since 5 > 1). At line 5, `factorial(5)` calls `factorial(4)`, and `factorial(5)` goes on the stack, on top of the `print()` function. Similarly, `factorial(4)` goes on top of `factorial(5)`, and so on. In the end, our stack looks like this:

Stack (top to bottom):

```
factorial(2)
factorial(3)
factorial(4)
factorial(5)
print(factorial(5))
```

When `x = 1` is passed to `factorial(x)`, line 2 evaluates to `True` and the function returns 1. This output goes to the function that called it (the function at the top of the stack), *i.e.,* `factorial(2)`. Then `factorial(2)` executes its last line (line 5), `2 * factorial(1)` or `2 * 1`. That output goes to `factorial(3)`, whose output (`3 * factorial(2)` or `3 * 2`) goes to `factorial(4)`, and so on. Finally, `factorial(5)` returns 120 to the `print()` function, which displays it on the screen.

Now, there is a limit to how many times the computer can call the function and grow the stack. If you execute `print(factorial(int(1e9)))`, I'm sure the program throws a *RecursionError* (due to stack overflow). You can raise the recursion limit (in Python, with `sys.setrecursionlimit`), but it would still take time.
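As a quick demonstration, we can trigger the error in Python without waiting for a billion calls, by asking for just a bit more depth than the interpreter currently allows:

```python
import sys

def factorial(x):
    return 1 if x <= 1 else x * factorial(x-1)

# Request more nested calls than the interpreter's recursion limit permits:
try:
    factorial(sys.getrecursionlimit() + 1000)
except RecursionError:
    print("maximum recursion depth exceeded")
```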

Pushing a function onto the stack and calling it again takes some constant amount of time (O(1)). This amount of time is small (or large, depending on the machine), but if this push-and-call process happens a billion times, the program takes far too long.

Obviously, no one is going to calculate the factorial of 1 billion in practice, but what if the *time complexity* problem arises with small numbers too? This happens when the computer performs the same calculation again and again. In the Fibonacci series (F(n) = F(n-1) + F(n-2)), for example, computing F(5) repeats the same computations multiple times. Look at its recursive algorithm:

```
def fibo(n):  # series = 1, 1, 2, 3, 5, 8, 13, ...
    if n <= 2:
        return 1                      # Step 1: base case
    else:
        return fibo(n-1) + fibo(n-2)  # Step 2: recursive calls
```

or

```
def fibo(n):
    return 1 if n <= 2 else fibo(n-1) + fibo(n-2)
```

In this simple solution for F(5), F(3) is already called twice and F(2) three times. If the problem is even slightly bigger, many of the calls are executed multiple times, and this takes time. So, if we are calculating the same thing again and again, why don't we just store it somewhere in memory?! (This idea is called *Dynamic Programming*.)
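We can check this claim by instrumenting the function with a counter (a throwaway sketch; `calls` is just a name chosen here):

```python
from collections import Counter

calls = Counter()

def fibo(n):
    calls[n] += 1   # count how often each argument is computed
    return 1 if n <= 2 else fibo(n-1) + fibo(n-2)

fibo(5)
print(calls[3], calls[2])   # F(3) is computed 2 times, F(2) 3 times
```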

## Memoization

Here comes *"memoization"* to the rescue. Yes, you read that right. That's not a misspelled word. It's *"memoization"*, not *"memorization"*.

In this algorithm, we store the results in an array. For example: if we want to calculate F(4) which is F(3) + F(2), we just need to search the array for F(3) and F(2) and add them. But how do we do that?

Consider the following lines of code:

```
1.  def mFibo(n, arr):
2.      if arr[n-1] is not None:   # already computed?
3.          return arr[n-1]
4.      if n == 1 or n == 2:       # base case
5.          result = 1
6.      else:
7.          result = mFibo(n-1, arr) + mFibo(n-2, arr)
8.      arr[n-1] = result          # store the result for later look-ups
9.      return result
10.
11. def fibo(x):
12.     arr = [None] * x
13.     return mFibo(x, arr)
```

At line 2, the program looks up the array at index `n-1`. If the value is not `None` (null), the array `arr` already holds the calculated value; otherwise, it calculates the value (line 5 or 7) and assigns it to the variable `result`. This value is then put into `arr` at index `n-1`. This way, it never re-calculates an already-calculated value and the process becomes much faster. However, it is still calling itself. Let's remove that factor too.
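As an aside, if you are working in Python, the standard library already ships this pattern: `functools.lru_cache` memoizes a function without the hand-rolled array.

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # cache the result for every distinct argument
def fibo(n):
    return 1 if n <= 2 else fibo(n-1) + fibo(n-2)

print(fibo(50))   # fast, because each F(k) is computed only once
```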

## Bottom-Up Approach

However, we can implement an even faster algorithm by going from bottom to top, *i.e.,* we start from F(1) and F(2) and go up to F(n). This is called the *bottom-up* approach, whereas recursion uses a *top-down* approach.

Consider the following lines of code:

#### Fibonacci Series

```
def fibo(n):
    arr = [None] * n
    arr[0], arr[1] = 1, 1    # base case (assumes n >= 2)
    for i in range(2, n):
        arr[i] = arr[i-1] + arr[i-2]
    return arr[n-1]
```

#### Factorial

```
def fact(n):
    r = 1                    # base case
    for i in range(1, n+1):
        r *= i
    return r
```

These algorithms are faster than the recursive ones but can't be applied everywhere, e.g., when sorting an array with Quick Sort or Merge Sort. Also, they can gain their speed at the expense of storage (*space complexity*).
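For Fibonacci specifically, even that storage cost can be trimmed: since each term needs only the previous two, a sketch that keeps just two variables runs in O(1) space (at the price of the mutation the bottom-up style brings):

```python
def fibo(n):
    a, b = 1, 1          # F(1) and F(2)
    for _ in range(n - 1):
        a, b = b, a + b  # slide the two-term window forward
    return a
```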

So, you need to decide which approach to use depending on the task at hand and on its time and space complexity. If the task is temporary and involves a small number n (or x), you can go for recursion. But if the task involves big numbers like 1,000,000, or re-calculates the same values again and again (as in the knapsack problem), then you want to use the *bottom-up* approach instead of the *top-down* one.

## Comments

> With tail-call optimization and some function composition for memoization, you can use recursion without sacrificing performance; most functional programming languages support these features. The bottom-up approach mutates variables, which is a big deal-breaker for me.

> Agreed. That mutating state is like a honeypot for bugs.

> Tail-call-optimized recursive and corecursive functions are also closer to pure mathematical notation...

> With a proper type system and pattern matching, you can even remove most if-statements in your code and decompose your functions into easier-to-test chunks.

> I was wondering how functional languages solve this problem, since they use recursion only when in need of loops.