Complexity of Recursion

I watched videos about the time and space complexity of recursive functions, but I can't find why it's important and what its purpose is. Most importantly, what's the core reason for its existence?
I do know that it matters when the program is given a large input. For example, take this code:
#include <iostream>
using namespace std;

int Fib(int n)
{
    if (n <= 1)
    {
        return n;
    }
    return Fib(n-1) + Fib(n-2);   // two recursive calls for every n > 1
}

int main()
{
    int n;
    cout << "Number N" << endl;
    cin >> n;
    int result = Fib(n);
    cout << result;
    return 0;
}

This program takes more time the larger the entered value is, because it has to walk to the end of every branch of the recursion tree and return a value each time, so it costs more time and space. But if we save each value once it has been found, it won't have to walk to each end every time. We do that by adding
if (F[n] != -1)
{
    return F[n];
}
at the top of Fib, plus a for loop in main that fills F with -1 (a complete version of this idea is sketched below).
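
Putting those pieces together, the memoized version might look something like this. This is only a sketch of the idea: the array size MAXN and the use of a global array F are assumptions made for illustration.

#include <iostream>
using namespace std;

const int MAXN = 100;   // assumed cap on n for this sketch
long long F[MAXN];      // F[i] == -1 means Fib(i) has not been computed yet

long long Fib(int n)
{
    if (n <= 1)
    {
        return n;
    }
    if (F[n] != -1)     // already computed: reuse the saved value
    {
        return F[n];
    }
    return F[n] = Fib(n-1) + Fib(n-2);   // compute once, then remember it
}

int main()
{
    for (int i = 0; i < MAXN; i++)   // the "for loop in main": mark every entry unknown
    {
        F[i] = -1;
    }

    int n;
    cout << "Number N" << endl;
    cin >> n;
    cout << Fib(n) << endl;
    return 0;
}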
I am confused about this topic. I am sorry if I haven't asked the proper question or couldn't explain what I want to know about it.


Thanks a bunch.
ColBosky
It can be useful when comparing different algorithms.
In your first code, you have a Fib function. For any n > 1, the function calls itself not once, but twice.

Jot down the logic on paper, something like:


Fib(5)
├── Fib(4)
│   ├── Fib(3)
│   │   ├── Fib(2)
│   │   │   ├── Fib(1)
│   │   │   └── Fib(0)
│   │   └── Fib(1)
│   └── Fib(2)
│       ├── Fib(1)
│       └── Fib(0)
└── Fib(3)
    ├── Fib(2)
    │   ├── Fib(1)
    │   └── Fib(0)
    └── Fib(1)



If you have some n, then what ends up happening is you have a bunch of splitting branches, and you end up making an exponential number of function calls, which is an awful complexity and blows up really quickly (2^5 = 32, 2^8 = 256, 2^16 = 65536, etc.). (Note: it's not exactly 2^n calls; the exact count grows as Θ(φ^n), where φ ≈ 1.618 is the golden ratio, somewhere between 2^(n/2) and 2^n, but the same concept applies.)

Notice how, for example, Fib(3) is computed two times, each independently of the other, so redundant calculations are happening.
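
You can watch the blow-up happen by counting the calls yourself. Here is a quick sketch (the global calls counter and the choice of test values are just for illustration):

#include <iostream>
using namespace std;

long long calls = 0;   // incremented on every invocation of Fib

int Fib(int n)
{
    ++calls;
    if (n <= 1)
        return n;
    return Fib(n-1) + Fib(n-2);
}

int main()
{
    for (int n = 5; n <= 30; n += 5)
    {
        calls = 0;
        Fib(n);
        cout << "Fib(" << n << ") took " << calls << " calls\n";
    }
    return 0;
}

For n = 30 this already reports over 2.6 million calls.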

What you refer to in your second example is called "memoization". See:
http://courses.csail.mit.edu/6.006/fall09/lecture_notes/lecture18.pdf
https://medium.com/@porzingod/fibonacci-and-memoization-e99f765b97f6

Saving the value of Fib(k) for each smaller k means those exponentially multiplying recursive calls never repeat: each value is computed at most once, so the memoized version makes only O(n) calls.
Complexity analysis is a useful tool to get a general sense of how efficient code is.
As you noted, you can make the code better.
But let's look at that idea a second time: only about 47 Fibonacci numbers fit in a 32-bit integer, and a 64-bit int doesn't get you that many more (around 93). If you wanted the darn things fast, you would grab a list of them off the web and put them into a constant array, so that
fib[nth_number] gives you the value with no computation at all. The same is true for factorials and other things that get huge fast: there are only a few of them within the normal computing realm of 64-bit integers, so making a table becomes the way to go.
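
For instance, here is a sketch with just the first dozen values filled in; a real table would list every entry up to Fib(92), the largest that fits in a signed 64-bit integer:

#include <cstdint>
#include <iostream>

// Precomputed once; a lookup is a single array index, no computation at all.
const int64_t fib[] = { 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 };

int main()
{
    std::cout << fib[10] << '\n';   // prints 55
    return 0;
}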

So ignore the horribad code here in terms of 'practical usage': it isn't practical. However, complexity analysis IS practical, so the thing to learn here is how to do that analysis, what sorts of things to watch out for when coding your own algorithms, and how to understand what they will do with large input (whatever that means for the code at hand).
