#include <stdio.h>

int main(void)
{
    float a, b, c;

    a = 0.1;             /* the double literal 0.1 is rounded when stored in a float */
    b = 0.2;
    c = a + b;

    if (c == (a + b))    /* direct equality comparison of floating-point values */
    {
        printf("a + b = c\n");
    }
    else
    {
        printf("a + b # c\n");
    }
    return 0;
}
and received the result "a + b # c".
When I changed the type of a, b, and c to double, the result became correct.
Can you explain why?
Thanks for your help.
Floating-point types (float and double) store approximations: they don't actually hold the exact decimal value you write. Direct equality comparisons are hardly ever a good idea.
Because 0.1 has no exact binary representation, the stored value might not actually be 0.1 but something like 0.0999999999999 or 0.1000000001. Likewise, 0.1 + 0.2 might come out as 0.29999999, which does not compare equal to 0.3.
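You can see this on your own machine by printing more digits than the type can faithfully represent. A minimal sketch (the exact digits you get depend on your platform):

#include <stdio.h>

int main(void)
{
    float f = 0.1f;
    double d = 0.1;

    /* Ask for 20 digits to expose the stored approximations */
    printf("float  0.1 = %.20f\n", f);
    printf("double 0.1 = %.20f\n", d);
    return 0;
}

On a typical IEEE 754 machine this prints something like 0.10000000149011611938 for the float and 0.10000000000000000555 for the double: close to 0.1, but not exactly 0.1.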
You're getting "lucky" with double here, but it won't always work with double either; you should avoid direct equality comparisons like this with any floating-point type. One common reason this particular comparison fails: the compiler is allowed to evaluate a + b in higher precision than float, so the freshly computed a + b on the right-hand side need not equal the value that was already rounded down to float when it was stored in c.
As for the difference between float and double: double is larger (typically 64 bits with a 53-bit significand, versus 32 bits with a 24-bit significand for float) and therefore has more precision, which is why it is less likely to show a rounding error in this instance.
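If you do need to compare floating-point results, compare within a tolerance instead of with ==. A minimal sketch (the tolerance 1e-6f here is an arbitrary example value; choose one that suits the magnitudes in your program):

#include <math.h>
#include <stdio.h>

int main(void)
{
    float a = 0.1f;
    float b = 0.2f;
    float c = a + b;
    const float eps = 1e-6f;   /* arbitrary example tolerance */

    /* Treat the values as equal if they differ by less than eps */
    if (fabsf(c - (a + b)) < eps)
    {
        printf("a + b = c (within tolerance)\n");
    }
    else
    {
        printf("a + b # c\n");
    }
    return 0;
}

Note that an absolute tolerance like this breaks down for large values; a relative tolerance (scaled by the magnitude of the operands) is more robust in the general case.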