bad precision when dividing doubles

Hi, I'm making histograms by dividing my range into a given number of bins, and I've stumbled upon a problem that, if it has no explanation, I think is VERY bad. Below I post my code, which just calculates the bin limits for a given number of bins over a range going from x_min to x_max, followed by its output at high precision. The problem is that the precision of the output is unacceptably bad for only some of the iterations, as shown below the code.

#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
  double range = 0;
  double n = 25;       // number of bins
  double xmin = 0;     // lower edge of the range
  double xmax = 50;    // upper edge of the range
  double i;
  double f1, f2;
  double faux1, faux2;
  double aux1, aux2, aux3;

  for (i = 0; i <= n; i++)
    {
      aux1 = (n - i);
      aux2 = n;
      aux3 = i;
      faux1 = (aux1 / aux2);
      faux2 = (aux3 / aux2);
      f1 = ((n - i) / n);
      f2 = (i / n);
      range = f1 * xmin + f2 * xmax;  // bin edge for iteration i
      cout << setprecision(20) << aux1 << " " << aux2 << " " << aux3 << " "
           << faux1 << " " << f1 << " " << faux2 << " " << f2 << " " << range << "\n";
    }

  return 0;
}

The output of the code is:
25 25 0 1 1 0 0 0
24 25 1 0.95999999999999996447 0.95999999999999996447 0.040000000000000000833 0.040000000000000000833 2
23 25 2 0.92000000000000003997 0.92000000000000003997 0.080000000000000001665 0.080000000000000001665 4
22 25 3 0.88000000000000000444 0.88000000000000000444 0.11999999999999999556 0.11999999999999999556 6
21 25 4 0.83999999999999996891 0.83999999999999996891 0.16000000000000000333 0.16000000000000000333 8
20 25 5 0.80000000000000004441 0.80000000000000004441 0.2000000000000000111 0.2000000000000000111 10
19 25 6 0.76000000000000000888 0.76000000000000000888 0.23999999999999999112 0.23999999999999999112 12
18 25 7 0.71999999999999997335 0.71999999999999997335 0.28000000000000002665 0.28000000000000002665 14.000000000000001776
17 25 8 0.68000000000000004885 0.68000000000000004885 0.32000000000000000666 0.32000000000000000666 16
16 25 9 0.64000000000000001332 0.64000000000000001332 0.35999999999999998668 0.35999999999999998668 18
15 25 10 0.5999999999999999778 0.5999999999999999778 0.4000000000000000222 0.4000000000000000222 20
14 25 11 0.56000000000000005329 0.56000000000000005329 0.44000000000000000222 0.44000000000000000222 22
13 25 12 0.52000000000000001776 0.52000000000000001776 0.47999999999999998224 0.47999999999999998224 24
12 25 13 0.47999999999999998224 0.47999999999999998224 0.52000000000000001776 0.52000000000000001776 26
11 25 14 0.44000000000000000222 0.44000000000000000222 0.56000000000000005329 0.56000000000000005329 28.000000000000003553
10 25 15 0.4000000000000000222 0.4000000000000000222 0.5999999999999999778 0.5999999999999999778 30
9 25 16 0.35999999999999998668 0.35999999999999998668 0.64000000000000001332 0.64000000000000001332 32
8 25 17 0.32000000000000000666 0.32000000000000000666 0.68000000000000004885 0.68000000000000004885 34
7 25 18 0.28000000000000002665 0.28000000000000002665 0.71999999999999997335 0.71999999999999997335 36
6 25 19 0.23999999999999999112 0.23999999999999999112 0.76000000000000000888 0.76000000000000000888 38
5 25 20 0.2000000000000000111 0.2000000000000000111 0.80000000000000004441 0.80000000000000004441 40
4 25 21 0.16000000000000000333 0.16000000000000000333 0.83999999999999996891 0.83999999999999996891 42
3 25 22 0.11999999999999999556 0.11999999999999999556 0.88000000000000000444 0.88000000000000000444 44
2 25 23 0.080000000000000001665 0.080000000000000001665 0.92000000000000003997 0.92000000000000003997 46
1 25 24 0.040000000000000000833 0.040000000000000000833 0.95999999999999996447 0.95999999999999996447 48
0 25 25 0 0 1 1 50

As you can see, the value of the variable range is quite far off when it should be 14 (i = 7) and 28 (i = 14). I first found the problem when running histogram functions from GSL, but finally traced it back to just a matter of dividing doubles. My machine is running Ubuntu 9.10 (64-bit). I've also tried compiling it on a Mac machine, and the error is consistently exactly the same as shown above every time the program is run. Also, if the doubles are changed to floats, the error appears when range should be 30 (i = 15). Please let me know if this is a known issue and how it could be solved. Should you need more information, please just let me know, but I think if you copy the code you'll be able to reproduce the problem without any complication.
Thanks in advance.
Thanks for the reply and the article referred to above. I am aware that there are precision issues we have to live with, so I'm actually not concerned about the error in the faux1/faux2 and f1/f2 variables. What I don't get here is why the values of range are rounded correctly in most cases, except when the result should be 14 and 28. Would these then just be worst-case precision scenarios?
Topic archived. No new replies allowed.