I am working on some code and I am seeing something I do not expect. The code essentially does a great deal of number crunching. It is deterministic; there is no iterative or stochastic component. The number crunching is done on a data structure of size n.
Now, I would expect that for a given n, the cost of running the code should be the same. However, I see that it varies greatly with the values of the numbers in the data structure. When the numbers are very large (around 10^14) or very small (around 10^-14), the code is almost 4 times slower than when the numbers are reasonably well behaved (in the range 10^-3 to 10^3). I have now tested this on two different computers and found identical results.
Any suggestions as to why this is the case? The code is compiled with g++ -O2.