Thank you for your reply! Could you please explain what you're doing? I tried looking it up but haven't found anything yet. Is (int) (...) basically just int(...), which converts ... to int type?
(int) just effects a "cast", i.e. a change of type, in this case to integer.
Converting to integer truncates toward zero rather than rounding. So, thinking about the decimal places involved, for non-negative values you get the same result as rounding if you add 0.5 first. (For negative values, truncation toward zero works against you, so you'd subtract 0.5 instead.)
Ahh okay, I see. So anything < num.5 still equals num.something after adding 0.5, which converts to num. While anything >= num.5 becomes (num+1).something, which converts to num+1.
Examples (Note to self):
3.3 + 0.5 = 3.8 (double converted to int = 3)
3.5 + 0.5 = 4.0 (double converted to int = 4)
3.7 + 0.5 = 4.2 (double converted to int = 4)
Apparently there's only one pathological case though... in practical terms this is not an issue, and doing +0.5 is fine. But for something that requires the utmost accuracy, it is something to watch out for.