Subtracting 0.1

This is my code:
#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
  double height, delta = 0.1;
  int count = 1;

  cout << "Enter starting height: ";
  cin >> height;

  // Print the current height, then subtract 0.1, until it drops below zero.
  while (height >= 0)
  {
    cout << count << "\t";
    cout << setiosflags(ios::showpoint) << setprecision(1)
         << setiosflags(ios::fixed) << height << endl;
    height = height - delta;
    count++;
  }

  cout << count << "\t" << height << endl;
  cout << "Loop is done\n";
  cout << endl;
  return 0;
}



The loop is supposed to end at the same point for every starting value, but it doesn't, and I can't figure out why.
I get something like this:
Enter starting height: 2
1       2.0
2       1.9
3       1.8
4       1.7
5       1.6
6       1.5
7       1.4
8       1.3
9       1.2
10      1.1
11      1.0
12      0.9
13      0.8
14      0.7
15      0.6
16      0.5
17      0.4
18      0.3
19      0.2
20      0.1
21      -0.0
Loop is done


Enter starting height: 5
1       5.0
2       4.9
3       4.8
4       4.7
5       4.6
6       4.5
7       4.4
8       4.3
9       4.2
10      4.1
11      4.0
12      3.9
13      3.8
14      3.7
15      3.6
16      3.5
17      3.4
18      3.3
19      3.2
20      3.1
21      3.0
22      2.9
23      2.8
24      2.7
25      2.6
26      2.5
27      2.4
28      2.3
29      2.2
30      2.1
31      2.0
32      1.9
33      1.8
34      1.7
35      1.6
36      1.5
37      1.4
38      1.3
39      1.2
40      1.1
41      1.0
42      0.9
43      0.8
44      0.7
45      0.6
46      0.5
47      0.4
48      0.3
49      0.2
50      0.1
51      0.0
52      -0.1
Loop is done


They end up with different final values even though they should end at the same point.
This has something to do with the way the numbers are being subtracted. Change setprecision(1) to setprecision(15) and you can see that it is not subtracting exactly 0.1, but a number very close to it.

When 2 is entered, it ends like this:
20     0.099999999999999
21     -0.000000000000001 
This rounds to -0.0.

When 5 is entered, it ends like this:
51     0.000000000000001
52     -0.099999999999999 
This rounds to -0.1.

Sorry, I do not know why it does this or how to fix it, but I am interested to see what the answer is.

Edit: Actually, I found this: http://docs.sun.com/source/806-3568/ncg_goldberg.html, a paper called What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg, which may shed some light on this situation.
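
For example, this small sketch (standard C++, nothing beyond <iostream> and <iomanip>) prints the stored value of 0.1 at high precision, and then the result of subtracting it from 2.0 twenty times; mathematically that would be exactly 0, but a tiny leftover error remains, and whether it lands slightly above or slightly below zero is what decides when the loop stops:

#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
  // 0.1 has no exact binary representation, so the stored value is only
  // very close to 0.1, and the error compounds with every subtraction.
  double delta = 0.1, height = 2.0;

  cout << setprecision(17) << "0.1 is stored as " << delta << endl;

  for (int i = 0; i < 20; ++i)
    height -= delta;

  // Mathematically this is 0, but a tiny error remains; it can fall on
  // either side of zero depending on how the rounding happens to go.
  cout << "2.0 minus 0.1 twenty times = " << height << endl;
  return 0;
}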
Yeah, that's how I figured out what the problem was.
Then I tried searching for ways to fix it, but had no luck, which is why I decided to ask here.
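
One common way around it, sketched below, is to stop using the repeatedly subtracted double as the loop control: count whole tenths with an integer and recompute the displayed height from that integer each pass, so no rounding error can accumulate. (The names here, such as tenths, are just for illustration; another option is to keep the original loop but test height >= -delta / 2 instead of height >= 0.)

#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
  double start;
  cout << "Enter starting height: ";
  cin >> start;

  // Round the starting height to a whole number of 0.1 steps.
  int tenths = static_cast<int>(start * 10 + 0.5);

  // The loop control is an exact integer, so every run that starts on a
  // multiple of 0.1 ends at exactly 0.0.
  int count = 1;
  for (int t = tenths; t >= 0; --t, ++count)
  {
    cout << count << "\t" << fixed << setprecision(1) << t / 10.0 << endl;
  }

  cout << "Loop is done\n";
  return 0;
}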