I'm studying C++ and I wrote a simple program to compute the factorial of a number using recursion.
I can't understand why it works fine for numbers smaller than 65, while for bigger ones it always returns 0.
#include <iostream>
using namespace std;

long factorial (long a)
{
    if (a > 1)
    {
        return a * factorial(a - 1);
    }
    else
    {
        return 1;
    }
}
int main ()
{
    long num;
    cout << "Enter the number whose factorial you want to compute: ";
    cin >> num;
    cout << num << "! = " << factorial (num) << "\n";
    return 0;
}
Isn't it much more efficient to write a factorial program with a loop instead of recursion? I can imagine that for extremely large numbers the recursive program would start to run very slowly.
Here's the problem: the OP just assumed that his program was returning correct results (when, for obvious reasons, it wasn't). 21! already overflows a 64-bit integer, but if you're not checking your results, all you see is a list of large numbers. For example, 30! modulo 2^64 is 9682165104862298112, while 30! is actually 265252859812191058636308480000000.
But 66! just happens to be divisible by 2^64, so from that point on the 64-bit result comes out as zero no matter what.
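If you want to see the wraparound for yourself, here is a minimal sketch (it uses unsigned 64-bit arithmetic, which wraps modulo 2^64 by definition, whereas overflowing the signed long in the original code is technically undefined behaviour):

#include <cstdint>
#include <initializer_list>
#include <iostream>

// Factorial reduced modulo 2^64: unsigned overflow wraps by definition,
// so the printed values are exactly the truncated results discussed above.
std::uint64_t factorial_mod_2_64(std::uint64_t n)
{
    std::uint64_t result = 1;
    for (std::uint64_t i = 2; i <= n; ++i)
        result *= i;   // silently reduced mod 2^64 once the true value no longer fits
    return result;
}

int main()
{
    for (int n : {20, 21, 30, 65, 66})
        std::cout << n << "! mod 2^64 = " << factorial_mod_2_64(n) << "\n";
}

20! still fits exactly, 30! should reproduce the 9682165104862298112 quoted above, and 66! comes out as 0.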
Here's a fun idea: try proving that for every i ∈ N0, (2^i + 2)! is divisible by 2^(2^i).
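A hint for that exercise (just a sketch of one possible route, via Legendre's formula for the exponent of 2 in n!):

v_2(n!) = floor(n/2) + floor(n/4) + floor(n/8) + ... = n - s_2(n)

where s_2(n) is the number of ones in the binary expansion of n. For n = 2^i + 2 with i >= 2 the expansion has exactly two ones, so v_2((2^i + 2)!) = 2^i + 2 - 2 = 2^i; the small cases i = 0 and i = 1 (3! = 6 and 4! = 24) can be checked by hand. In particular, 66 = 2^6 + 2 gives v_2(66!) = 64, which is exactly why 66! is divisible by 2^64.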
Your post is quite interesting to me, as I'm trying to understand whether the code is correct or not.
You said that for obvious reasons it is wrong, but they are not obvious at all to me; could you please explain the problem to me in more detail?
Isn't it much more efficient to write a factorial program with a loop instead of recursion? I can imagine that for extremely large numbers the recursive program would start to run very slowly.
No, it is not. The complexity is linear in both cases.
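For comparison, here is what the loop version could look like (a sketch; it performs the same number of multiplications as the recursive one, it just doesn't grow the call stack, and it overflows for exactly the same inputs):

long factorial_iter(long a)
{
    long result = 1;
    for (long i = 2; i <= a; ++i)
        result *= i;   // one multiplication per step, just like the recursive calls
    return result;
}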
Resident Biscuit wrote:
(using long double) won't help at all. He doesn't need floating point at all. A lot of the space used in doubles is reserved for the mantissa.
What's your point?
All that matters is that you can represent bigger numbers with long double.
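To illustrate that point, a sketch of a long double variant (note it only buys range, not exactness: once the true factorial exceeds what the mantissa can hold, you get a rounded approximation rather than the exact integer):

#include <iostream>

long double factorial_ld(int n)
{
    long double result = 1.0L;
    for (int i = 2; i <= n; ++i)
        result *= i;   // stays finite far beyond the 64-bit integer range, but only approximately
    return result;
}

int main()
{
    std::cout << "65! is roughly " << factorial_ld(65) << "\n";  // a value around 8.2e90 instead of a wrapped-around integer
}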
@OP: your code is fine. You run out of fingers to count, that's all
You said that for obvious reasons it is wrong, but they are not obvious at all to me; could you please explain the problem to me in more detail?
It's explained in the sentence immediately after that.
ne555 wrote:
The complexity is linear in both cases.
I would add that the space complexity of the iterative version is constant while that of the recursive version is linear, but once the numbers get big, the cost of the multiplications grows much faster than the cost of the recursion itself.
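For what it's worth, the linear stack usage is an artifact of how the recursion is written. A tail-recursive sketch like the following (with an accumulator parameter, a name chosen here for illustration) lets most optimizing compilers reuse the stack frame and get back to constant space:

long factorial_acc(long a, long acc = 1)
{
    if (a <= 1)
        return acc;                        // the accumulated product is the answer
    return factorial_acc(a - 1, acc * a);  // tail call: nothing left to do after it returns
}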