The difference between type double and type float

Apr 15, 2010 at 2:47pm
I tried the following code:
#include <stdio.h>

int main()
{
    float a, b, c;
    
    a = 0.1;
    b = 0.2;
    c = a + b;
    
    if (c == (a + b))
    {
        printf("a + b = c\n");
    }
    else
    {
        printf("a + b # c \n");
    }
    
    return 0;
}

and received the result "a + b # c".
When I changed the type of a, b, and c to double, the result came out right.
Can you explain why?
Thanks for your help.
Last edited on Apr 15, 2010 at 2:50pm
Apr 15, 2010 at 3:05pm
Floating-point types (float and double) are approximations; they don't actually hold the exact number you give them. Direct equality comparisons are hardly ever a good idea.

0.1 might not actually be 0.1 but something like 0.0999999999999, and 0.1 + 0.2 might be 0.29999999, which does not equal 0.3.
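
You can see this for yourself by printing more digits than either type can actually store. Here's a quick sketch (assuming ordinary IEEE 754 floats, which is what almost every machine uses):

#include <stdio.h>

int main()
{
    float f = 0.1f;
    double d = 0.1;

    /* Ask for 20 digits, far more than either type really holds,
       so the stored approximation becomes visible. */
    printf("float  0.1 = %.20f\n", f);
    printf("double 0.1 = %.20f\n", d);

    return 0;
}

On a typical machine the float line prints something like 0.10000000149011611938 and the double line 0.10000000000000000555 — neither is exactly 0.1.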

You're getting "lucky" with it working for double, but it won't always work with double either. (What's probably happening: the compiler evaluates a + b inside the if at higher precision than float, while c has already been rounded to float, so the two sides differ; with double both sides happen to round the same way.) You should avoid direct comparisons like this with any floating-point type.
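
The usual workaround is to compare against a small tolerance instead of using ==. A minimal sketch — the 1e-6f tolerance is just an illustrative choice; the right value depends on the magnitudes you're working with:

#include <math.h>
#include <stdio.h>

int main()
{
    float a = 0.1f, b = 0.2f, c;

    c = a + b;

    /* Treat the values as equal if they differ by less than a
       small tolerance, rather than demanding bit-exact equality. */
    if (fabsf(c - (a + b)) < 1e-6f)
    {
        printf("a + b = c (within tolerance)\n");
    }
    else
    {
        printf("a + b # c\n");
    }

    return 0;
}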

Anyway, the difference between float and double is that double is larger and therefore has more precision, which is why it's less likely to show rounding errors in a case like this.
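
Concretely, on most implementations float is 4 bytes with about 6 reliable decimal digits and double is 8 bytes with about 15. You can check what your own compiler gives you with <float.h> (another quick sketch):

#include <float.h>
#include <stdio.h>

int main()
{
    /* FLT_DIG / DBL_DIG: how many decimal digits each type can
       store and read back without loss. */
    printf("float:  %u bytes, %d reliable decimal digits\n",
           (unsigned)sizeof(float), FLT_DIG);
    printf("double: %u bytes, %d reliable decimal digits\n",
           (unsigned)sizeof(double), DBL_DIG);

    return 0;
}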


EDIT:

more reading:

http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.17
Last edited on Apr 15, 2010 at 3:06pm
Apr 15, 2010 at 3:35pm
Thank you very much.
Topic archived. No new replies allowed.