Floats have about 7 significant decimal digits of precision, while doubles have about 16.
Does the first example default to a double because there is no suffix?
No, its type is float.
I always use doubles; I would only use floats if the lower precision didn't matter and I had millions of them, so that memory usage became an issue.
float number = 1.5;
number is a float and is initialized with the value of 1.5 converted into a float.
I am inclined to always use 10.0 instead of 10.0f; however, that is user preference.
I think there's more to it than user preference. Floats have less precision, so the expression could give you a less precise answer. The conversion from float to double is also an unnecessary cost. I would recommend not mixing float and double.
I regard float as a relic from the days of 16-bit computers. I'd be rather less likely to use float than short int. In the case of integer values, the shorter value may be perfectly adequate.
However, in the case of floating point, the loss of precision can gradually accumulate over repeated calculations, so the final result tends to be less accurate than the maximum precision that data type can hold.