Better still, properly cast it. |
There's no need to cast to a double/float if he wants an integer result. (edit: at least I didn't think so, but I might be wrong)
Casting to a double/float and then back to an integer is more expensive than keeping everything an integer the whole time.
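For illustration (the numbers and the *100 factor here are made up, not the actual formula from the thread), both of these produce the same integer; the second just does extra conversions:

#include <iostream>

int main()
{
    int ticks = 50;      // made-up values for illustration
    int scale = 3600;

    // Stay in integer math: multiply first, then divide.
    int a = ticks * 100 / scale;                                         // 1

    // Round-trip through double: same integer result, extra conversions.
    int b = static_cast<int>(static_cast<double>(ticks) * 100 / scale);  // 1

    std::cout << a << ' ' << b << '\n';
}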
Doing the multiplication first might give you the correct answer, but I would say that in general this is bad programming practice. |
I would say the exact opposite. (usually)
EDIT:
Using only added a few 0's to the decimal place and didn't change the result. |
You're casting after the division is performed, so the result is still truncated. To do what xkcd83 is suggesting, you'd have to cast one or both of the values before the division takes place.
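Roughly like this (made-up values, just to show where the cast has to go):

#include <iostream>

int main()
{
    int ticks = 50;        // made-up values for illustration
    int perHour = 3600;

    // Cast applied to the result: the integer division runs first and
    // truncates to 0, so converting afterwards only adds the ".0".
    double castAfter = static_cast<double>(ticks / perHour);     // 0.0

    // Cast applied to one operand: the other operand is converted too,
    // so the division itself happens in floating point.
    double castBefore = static_cast<double>(ticks) / perHour;    // ~0.0139

    std::cout << castAfter << ' ' << castBefore << '\n';
}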
Disch's way does work and I don't really see a reason why I shouldn't use it that way? What makes it bad practice? |
The downsides to the approach I recommended are:
1) Readability (subjective; some people might find it harder to read, though personally I think it's easier to read than casts).
2) Risk of overflow. If "ticks" is >= 596524, the multiplication will exceed the capacity of a 32-bit signed integer and you'll get an incorrect result.
So yeah, if "ticks" is going to get that large, you might want to consider the casting approach instead (see the sketch below). I had assumed this wasn't the case because you kept getting 0, so I thought the numbers were always small.
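Something like this, assuming a multiply-first expression in the ballpark of ticks * 3600 / 1000 (the 596524 threshold above implies a factor of roughly 3600, but the exact formula and divisor are assumptions on my part):

#include <cstdint>
#include <iostream>

int main()
{
    std::int32_t ticks = 600000;   // past the ~596524 threshold

    // Multiply-first in 32-bit math: ticks * 3600 would exceed INT32_MAX
    // (2147483647), which is signed overflow -- undefined behaviour.
    // std::int32_t bad = ticks * 3600 / 1000;

    // Widening (or casting) one operand before the multiplication keeps
    // the intermediate product in range.
    std::int64_t safe = static_cast<std::int64_t>(ticks) * 3600 / 1000;

    // Casting to double before the arithmetic also avoids the overflow,
    // at the cost of the conversions mentioned earlier.
    double viaDouble = static_cast<double>(ticks) * 3600 / 1000;

    std::cout << safe << ' ' << viaDouble << '\n';
}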