Bear in mind that floating point arithmetic is quite different from integer arithmetic. Multiplication and division are handled very differently for floating point than they are for integers.
int foo = 5;
foo = foo * 0.2;
This converts 'foo' to a double (not as straightforward a step as you might think), multiplies, then converts the result back to an int. AFAIK, this would be less efficient than / 5 because of the two conversions.
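To illustrate, here's a rough sketch of what that line effectively has to do (the exact instructions vary by compiler and platform; this is just the conceptual sequence):

#include <iostream>

int main()
{
    int foo = 5;

    // Conceptually, foo = foo * 0.2; becomes:
    double temp = static_cast<double>(foo); // int -> double conversion
    temp = temp * 0.2;                      // floating point multiply
    foo = static_cast<int>(temp);           // double -> int conversion (truncates toward zero)

    std::cout << foo << '\n'; // prints 1 here
}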
On the other hand... if you cut the conversion out:
double foo = 5;
foo = foo * 0.2;
This is probably exactly the same as foo / 5. And even if you do / 5 and it turns out to be slower, the compiler could optimize it and replace the / 5 with * 0.2. So it's not something worth stressing about.
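If you want to convince yourself the two forms behave the same, here's a minimal sketch (keep in mind that 0.2 isn't exactly representable in binary floating point, so the two results aren't guaranteed to be bit-identical for every input, though they match here):

#include <iostream>

int main()
{
    double foo = 5;

    double byDivision       = foo / 5;   // divide directly
    double byMultiplication = foo * 0.2; // multiply by the reciprocal

    std::cout << byDivision << ' ' << byMultiplication << '\n'; // 1 1
}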
As has already been said -- just do whatever is easiest and most straightforward to understand. Let the compiler do the optimizing. No need to micromanage. Oftentimes you'll just end up shooting yourself in the foot.
Case in point: way back when multiplication was really slow, games had to multiply by 320 constantly (for pixel plotting on a screen 320 pixels wide). Since 320 == 256 + 64, coders often did (y << 8) + (y << 6) instead of (y * 320).
Later, once multiplication on CPUs got faster, the << "optimization" actually turned out to be slower.
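For reference, here's a small self-contained check of that old trick (the loop bound of 240 is just an arbitrary screen height for illustration):

#include <cassert>

int main()
{
    for (int y = 0; y < 240; ++y)
    {
        // 320 == 256 + 64, so multiplying by 320 can be split into two shifts and an add.
        int byMultiply = y * 320;
        int byShifts   = (y << 8) + (y << 6);
        assert(byMultiply == byShifts);
    }
}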
These days, processors are so freaking fast that it hardly matters; program speed is much more likely to (read: virtually always going to) bottleneck elsewhere -- like memory access.