#include <stdio.h>

int main(void) {
    int n = 10000;
    float f = 10000.0f;
    if (n == f) printf("They're equal.\n");
    else printf("They're not equal.\n");
    return 0;
}
This prints "They're equal.", even though n and f have different bit patterns. Why is that? My guess is that in C, when one operand is an int and the other is a float, the compiler converts the int to a float and then compares the two floats, so in this case they compare equal. Is my assumption correct?
Ok. For example, say you have 1.9. If you cast it with (int), it becomes 1. The cast simply chops off everything to the right of the decimal point; there is no rounding.