The following code is an attempt to test whether an input Proth number is prime or not.
Line 6 is there purely for testing purposes, so I can see what the program is outputting; it won't be in the final code. The main problem is that the numbers tested get too big too quickly, and by the time I'm trying to test prothnumber==b==25 it's no longer calculating correctly. What's really strange is that with, say, b==33 it maxes out at result==2147483648, which is obviously a computing limitation. But with b==25 it gives result==244140625, which means it simply didn't add the one (which it does fine everywhere else).
The other thing is that it's giving: [Warning] converting to 'long int' from 'double'.
I'm guessing this is because int b is coming from an array. Anyway, any ideas on how to keep the numbers from maxing out so quickly would be greatly appreciated.
1 void prothprime(int b){
2 long base, result;
3 int ppbrkr=0;
4 for (base=2; base<=9 & ppbrkr==0; base++){
5 result=(pow(base, ((b-1)/2))+1);
6 cout << "base=" << base << ", result=" << result << "\n";
7 if (result%b==0) cout << "Proth Prime\n", ppbrkr=1;
8 }
9 if (ppbrkr==0) cout << "Proth Composite\n";
10 }
As a matter of design, the function should return a bool, not perform I/O operations.
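Something like the following, as a rough sketch: it keeps your pow()-based loop exactly as it is (so the overflow and conversion problems discussed below are untouched) and only moves the printing out into a hypothetical caller.

#include <cmath>
#include <iostream>
using namespace std;

// Same pow()-based test as the original; only the interface changes.
// The static_cast just silences the 'long int from double' warning;
// it is the same conversion the compiler was already doing implicitly.
bool prothprime(int b) {
    for (long base = 2; base <= 9; base++) {
        long result = static_cast<long>(pow(base, (b - 1) / 2) + 1);
        if (result % b == 0) return true;   // base^((b-1)/2) == -1 (mod b)
    }
    return false;
}

int main() {
    int n = 25;   // whatever value you are testing
    cout << (prothprime(n) ? "Proth Prime\n" : "Proth Composite\n");
}

Keeping the I/O in the caller also makes the function easier to reuse and to test on its own.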
Nope. That's not strange at all. 2147483647 is the limit for signed 32-bit integers (2^31-1).
You have two choices here: a) use an unsigned long long, which has twice as many bits as long (unsigned __int64's limit is 18,446,744,073,709,551,615), or b) use an arbitrary precision library. I recommend b.
pow() takes two doubles and returns a double. I'm not sure if there's an overload for long doubles, which you would need for accurate calculations near the limits of the data type if you use long longs.
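For what it's worth, here's a rough sketch of option a) that also drops pow() entirely and builds the power with a plain integer loop, so nothing ever passes through a double. Note that 9^((b-1)/2) still outgrows even 64 bits once the exponent gets past 20 or so, which is why I'd still steer you toward option b) for anything but small inputs.

#include <iostream>
using namespace std;

// Option a): 64-bit unsigned integers, with the power computed by repeated
// integer multiplication instead of pow(), so there is no rounding and the
// "+1" can never be lost.  Still overflows for larger b; see option b).
bool prothprime(int b) {
    for (unsigned long long base = 2; base <= 9; base++) {
        unsigned long long result = 1;
        for (int i = 0; i < (b - 1) / 2; i++)
            result *= base;          // exact integer arithmetic, no doubles
        result += 1;
        if (result % b == 0) return true;
    }
    return false;
}

int main() {
    cout << (prothprime(25) ? "Proth Prime\n" : "Proth Composite\n");
}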
Thanks, I'll look into the arbitrary precision library - may ask about it later if I get confused.
One more thing though: any idea why, when testing the number 25, it fails to add the one once it gets to base==5?