Another question: if you put in a "large" negative number which included decimal points, would it automatically be treated as a signed float? Similar to how adding the decimal places turned it into a double.
Whatever number one inputs (via std::cin, say) will be converted to the type of the variable that input is going into, provided there are no errors; be aware there might be loss of precision.
The default for floating point numbers is double. double is preferable because the precision of float is easily exceeded: float can do 6 or 7 significant figures, while double can do 15 or 16. So only use float if a library requires it (typically a graphics library), or if you have billions of them.
If it's a literal with either a decimal point or an exponent (e or E), then the default is double, so these are doubles:
std::cout << 22.0 / 7.0 << "\n";
std::cout << 2.0 / 7.3 << "\n";
std::cout << 0.1 / 7.0 << "\n";
Note that I always put digits on both sides of the decimal point, which is always present; this reinforces the idea that it is a double.
Also note the difference if those literals were forced to be floats: if more decimal places were displayed, the doubles would still be accurate to about 15 significant figures, while a float is only accurate to about 6.
One can force a float by appending an 'f' to the literal:
float a = 10.0f;
float b = 10.0;               // 10.0 is a double, implicitly converted to float
double c = -123456789.123456; // fine: within double's 15-16 significant figures
float d = -1234567.1;         // loss of precision: more digits than float can hold
double e = -1.23e300;         // within range of double (up to about 1.8e308)
float f = -1.23e39;           // outside range of float (max is about 3.4e38)
You can find out what the limits of the different types are; have a play with the example code at the bottom of this link:
http://en.cppreference.com/w/cpp/types/numeric_limits