What I am missing here is an explanation of why someone would want to convert a recursive solution to an iterative one.
Often a recursive solution is easier to understand and to maintain, which can be a huge benefit. On the other hand, if performance (both time and memory) really matters, iterative solutions are practically always superior.
Some compilers have tail recursion optimization, which basically means that they perform this conversion internally and automatically, offering you the best of both worlds.
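To make the conversion concrete, here is a hypothetical example (not from the article): a tail-recursive sum alongside the loop that a tail-call-optimizing compiler effectively turns it into. Note that Python itself does not perform this optimization; the code only illustrates the transformation.

```python
def sum_recursive(n, acc=0):
    # Tail call: the recursive call is the last thing this function does,
    # so no work remains in the current frame after it returns.
    if n == 0:
        return acc
    return sum_recursive(n - 1, acc + n)

def sum_iterative(n):
    # What tail-call optimization effectively produces: the recursive call
    # becomes a jump back to the top of a loop with updated parameters.
    acc = 0
    while n != 0:
        acc += n
        n -= 1
    return acc

print(sum_recursive(100))  # 5050
print(sum_iterative(100))  # 5050
```

The accumulator parameter `acc` is the key trick: it moves the pending work into an argument, so nothing is left to do after the recursive call.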
Well, it may not be needed but I thought it was interesting to know and to share. It may also help someone to understand how some compilers convert some recursive algorithms into iterative ones.
Since your last paragraph can be a little confusing, for anyone who's wondering:
Tail call optimization can optimize the iterative function explained in this article, but it's unable to optimize the function in the first example.
Each time a function is called, the program has to allocate memory for all of the parameters used in the function. That memory cannot be freed until the compiler/interpreter is certain it won't ever be used again.
Normally that means when the function returns, which in the first example only happens after adding 1 to the return value of the recursive call.
In the iterative function the returned value isn't modified afterwards, so the original call's memory can be freed immediately, and only the new call needs to be kept around. (I'm paraphrasing a little; the optimization might not work exactly like this, and it might do additional optimization, but this is the general gist of the idea behind it.)
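The point above can be sketched with a hypothetical pair of functions (not the article's own code): in the non-tail version the caller's frame must survive the recursive call, because the "+ 1" still has to run afterwards; in the tail version nothing remains to do, so the frame could be reused.

```python
def count_non_tail(n):
    if n == 0:
        return 0
    # This frame must stay alive across the call below:
    # we still need to add 1 after it returns.
    return 1 + count_non_tail(n - 1)

def count_tail(n, acc=0):
    if n == 0:
        return acc
    # Nothing is left to do in this frame after the call,
    # so an optimizing compiler could discard it immediately
    # and reuse its memory for the next call.
    return count_tail(n - 1, acc + 1)

print(count_non_tail(10))  # 10
print(count_tail(10))      # 10
```

Both compute the same result; only the second has the shape that tail-call optimization can turn into a constant-memory loop.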