I was thinking about how we test whether a number is prime, and it occurred to me that when I check whether a number is divisible by the numbers less than it (and greater than 1), I skip the composites and only try other prime numbers. (Also, you only need to check up to halfway.)
This led me to believe that maybe I could implement an isPrime function with better performance using this technique. Like so:
// version A, without the technique
bool isPrime(int n) {
    if( n <= 1 )
        return false;
    int half = n / 2;
    for( int i = 2; i <= half; i++ )
        if( n % i == 0 )
            return false;
    return true;
}
// version B, with the technique
bool isPrime(int n) {
    if( n <= 1 )
        return false;
    int half = n / 2;
    for( int i = 2; i <= half; i++ )
        // use a recursive call to check for prime-ness first
        if( isPrime(i) && n % i == 0 )
            return false;
    return true;
}
However, I then thought: I only use this shortcut because I can remember which (lower) numbers are prime, while on a computer the recursive calls can take up a lot of memory, and it's probably harder for me to determine whether n % i == 0 than it is for a computer. So which method is better?
But the same goes for multiples of 3, 5, 7, 11, ..., p_n, hence the recursive isPrime call.
Is there an easier (faster) way to skip numbers than checking whether each one is prime (like the i += 2 idea)?
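For reference, a common shortcut that needs no recursion at all is to handle 2 once, then test only odd divisors, and stop at sqrt(n) instead of n / 2 (if n had a divisor above its square root, the matching cofactor below it would already have been found). A rough sketch, not benchmarked:

// Trial division: handle 2 separately, then test only odd divisors
// up to sqrt(n). Any composite n has a divisor no larger than sqrt(n).
bool isPrime(int n) {
    if( n <= 1 )
        return false;
    if( n == 2 )
        return true;
    if( n % 2 == 0 )
        return false;
    for( int i = 3; i <= n / i; i += 2 )   // i <= n / i is i * i <= n without overflow
        if( n % i == 0 )
            return false;
    return true;
}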
If you only need the first, say, couple of thousand primes, it's probably fastest to just pre-calculate them, hold them in a big array, and do a binary search. (If your test input is random, do a quick %2, %3, %5, ... check before the lookup, with the number of checks approximately equal to the logarithm of the size of your pre-calculated buffer.)
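A rough sketch of that approach; the limit of 10000, the sieve, and std::binary_search are just illustrative choices, and the table is only valid for inputs below the limit:

#include <vector>
#include <algorithm>

static const int LIMIT = 10000;   // arbitrary cutoff for the pre-calculated table

// Sieve of Eratosthenes: collect all primes below LIMIT once.
static std::vector<int> buildPrimes() {
    std::vector<bool> sieve(LIMIT, true);
    std::vector<int> primes;
    for( int i = 2; i < LIMIT; ++i )
        if( sieve[i] ) {
            primes.push_back(i);
            for( int j = 2 * i; j < LIMIT; j += i )
                sieve[j] = false;
        }
    return primes;
}

// Only meaningful for 0 <= n < LIMIT; the table is built once, on first call.
bool isPrimeLookup(int n) {
    static const std::vector<int> primes = buildPrimes();
    return std::binary_search(primes.begin(), primes.end(), n);
}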
If you want to test very large numbers (e.g. as used in cryptographic applications) and you are really concerned about speed, take a look at common math libraries and their bigint classes (or equivalents). They usually offer "ProbablyPrime()"-style functions that are much, much faster for very large prime candidates drawn from random numbers 1), but which may fail in a negligible 2) number of cases.
1) Some of the algorithms won't work if the number under test is not random but may instead come from a "malicious" source. Beware of this, especially in cryptographic applications!
2) "neglectable" as in "a much smaller probability as that a random bit in your memory suddenly flips because of RAM-malfunction and you get a wrong result in your other algorithm too" ;-)
Actually I'm taking a number theory course, and we have talked a bit about such functions.
Do you have any links I should look at? If not I'll just google it.
...So as not to get away from the topic: I was strictly looking at how to optimize the first function above (and, moreover, whether the recursive call speeds it up in any case). I did think about storing primes in an array, but I don't have any preconditions on the size of the input value; also, would constructing the array within the function (to avoid holding onto the memory) be faster than not using one?
The point of the array of primes is to compute them only once. Making it local (non-static) to the function defeats that purpose. It is still better (or at least not worse) to store the first N primes in an array, even if the size of the input is unbounded.
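A sketch of what that might look like: a static local table of small primes is built on the first call and reused, and inputs larger than the square of the last tabled prime fall back to plain odd trial divisors. The cutoff of 1000 is an arbitrary choice:

#include <vector>

// Trial-divide by a static table of small primes (computed only once),
// then by further odd candidates up to sqrt(n) for inputs beyond the table.
bool isPrime(int n) {
    static const std::vector<int> smallPrimes = [] {
        std::vector<int> primes;
        for( int c = 2; c < 1000; ++c ) {          // 1000 is an arbitrary cutoff
            bool prime = true;
            for( int p : primes ) {
                if( p * p > c ) break;
                if( c % p == 0 ) { prime = false; break; }
            }
            if( prime ) primes.push_back(c);
        }
        return primes;
    }();

    if( n <= 1 )
        return false;
    for( int p : smallPrimes ) {
        if( p > n / p ) return true;               // p * p > n: no divisor found
        if( n % p == 0 ) return n == p;
    }
    // n exceeds the square of the last tabled prime: continue with odd candidates.
    for( int i = smallPrimes.back() + 2; i <= n / i; i += 2 )
        if( n % i == 0 )
            return false;
    return true;
}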