`2` and `7` are integer literals, and dividing two integers performs integer division, which discards any fractional part of the result. If you want a floating-point result such as 3.5, at least one operand must be a floating-point number:
`2.0 / 7`, or `2.0f / 7`, or `2 / 7.0`, etc.
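For instance, a quick C++ sketch (the printed values assume the default six-significant-digit stream precision):

```cpp
#include <iostream>

int main() {
    std::cout << 2 / 7 << "\n";    // integer division: prints 0
    std::cout << 2.0 / 7 << "\n";  // double division: prints 0.285714
    std::cout << 2.0f / 7 << "\n"; // float division: prints 0.285714
    std::cout << 7 / 2 << "\n";    // integer division: prints 3, not 3.5
    std::cout << 7 / 2.0 << "\n";  // double division: prints 3.5
    return 0;
}
```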
`double` is the default type of a floating-point literal such as `3.5`. To write a `float` literal, add the `f` suffix: `3.5f`.
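You can check the deduced types at compile time, for example:

```cpp
#include <type_traits>

int main() {
    auto d = 3.5;   // no suffix: deduced as double
    auto f = 3.5f;  // 'f' suffix: deduced as float
    static_assert(std::is_same<decltype(d), double>::value, "3.5 is a double");
    static_assert(std::is_same<decltype(f), float>::value,  "3.5f is a float");
    return 0;
}
```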
Regarding performance, see this SO question: https://stackoverflow.com/questions/4584637/double-or-float-which-is-faster
The answer is: it depends on the hardware and on how the data is being used. It may very well be that the FPU computes in `double` (or in an internal format that is neither `float` nor `double`) and has to convert `float` operands first, in which case `double` is actually faster in that regard. On the other hand, `double` takes up twice the memory, so it also depends on where the bottleneck is.
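If performance matters for your workload, measure it yourself. Here's a rough sketch of the kind of micro-benchmark I'd start from (timings vary wildly with hardware, compiler flags, and access patterns, so don't read too much into any single run):

```cpp
#include <chrono>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

// Naive timing sketch: sums a large array of each type. This mostly
// measures memory traffic plus floating-point adds; treat it as a
// starting point for your own measurements, not as a verdict.
template <typename T>
double time_sum_ms(std::size_t n) {
    std::vector<T> data(n, static_cast<T>(1.0));
    auto start = std::chrono::steady_clock::now();
    T sum = std::accumulate(data.begin(), data.end(), static_cast<T>(0));
    auto stop = std::chrono::steady_clock::now();
    // Use the result so the compiler can't discard the whole loop.
    if (sum < 0) std::cout << sum;
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main() {
    const std::size_t n = 10000000;
    std::cout << "float:  " << time_sum_ms<float>(n)  << " ms\n";
    std::cout << "double: " << time_sum_ms<double>(n) << " ms\n";
    return 0;
}
```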
The generic advice I've heard is to just use `double` unless you specifically have a reason to use `float`. One such reason: if you're passing a lot of data to the GPU, the GPU may be optimized for floats rather than doubles. In OpenGL's GLSL, a `vec3` is a vector of floats, while `dvec3` is specifically a vector of doubles, so you can see that floats are the more 'natural' option in that domain. (Doubles weren't even available in GLSL until OpenGL 4.0.)
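To make the memory/bandwidth side of this concrete, here's a small sketch; `Vec3f` and `Vec3d` are hypothetical CPU-side stand-ins for `vec3`/`dvec3` vertex data, and the sizes assume the typical 4-byte `float` and 8-byte `double`:

```cpp
#include <cstddef>
#include <iostream>

// Hypothetical CPU-side mirrors of GLSL's vec3 and dvec3, just to show
// the footprint difference when shipping lots of vertices to the GPU.
struct Vec3f { float  x, y, z; };  // mirrors GLSL vec3 (typically 12 bytes)
struct Vec3d { double x, y, z; };  // mirrors GLSL dvec3 (typically 24 bytes)

int main() {
    const std::size_t vertexCount = 1000000;
    std::cout << "1M float vertices:  "
              << vertexCount * sizeof(Vec3f) / (1024 * 1024) << " MiB\n"; // ~11 MiB
    std::cout << "1M double vertices: "
              << vertexCount * sizeof(Vec3d) / (1024 * 1024) << " MiB\n"; // ~22 MiB
    return 0;
}
```

Halving the size of every vertex you upload is exactly the kind of bottleneck-dependent win that makes `float` the natural choice on the GPU side.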