floating points

Hello again,

The following code in C:

#include <stdio.h>

int main(void) {
    int n = 10000;
    float f = 10000.0f;
    if (n == f) printf("They're equal.\n");
    else printf("They're not equal.\n");
}


prints "They're equal.", even though n and f differ in their bit patterns. Why is that? My guess is that in C, when one operand is an integer and the other is a float, the compiler converts the integer to float and then compares them, so in this case the two values end up equal. Is my assumption correct?
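
If my guess is right, the same thing should happen even when the values really differ, because the int would get rounded when it is converted to float. For example (16777217 is just a value I picked because, as far as I know, it cannot be stored exactly in a float):

#include <stdio.h>

int main(void) {
    int   n = 16777217;      /* 2^24 + 1, not exactly representable as a float */
    float f = 16777216.0f;   /* 2^24 */

    /* If n is converted to float first, 16777217 rounds to 16777216.0f,
       so this should print "equal" even though the values differ. */
    if (n == f) printf("equal\n");
    else        printf("not equal\n");

    /* A double can hold both values exactly, so this should print "not equal as double". */
    if ((double)n == (double)f) printf("equal as double\n");
    else                        printf("not equal as double\n");
    return 0;
}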

Thanks.

the compiler converts the integer to float and then compares them


:) It really just drops off anything less than 1 (the fractional part is truncated).
It really just drops off anything less than 1


Can you please elaborate on this? (This may sound stupid, but I really need to understand it.)

Thanks
OK. For example, if you have 1.9 and you cast it with (int), it becomes 1. It basically chops off anything on the right-hand side of the decimal point; no rounding.
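
A quick way to see it (the values are just examples I made up):

#include <stdio.h>

int main(void) {
    printf("%d\n", (int)1.9);    /* prints 1: the .9 is chopped off */
    printf("%d\n", (int)-1.9);   /* prints -1: truncates toward zero, no rounding */
    printf("%d\n", (int)1.9999); /* still 1 */
    return 0;
}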