I've come to the formal explanation of literals, and the manual reads as follows:
"All literal values have a data type, but this raises a question. As you know, there are several different types of integers, such as int, short int, and unsigned long int. There are also different floating-point types. The question is: how does the compiler determine the type of a literal? For example, is 123.23 a float or a double?"
Then it gives an answer. The problem I have is that I thought the compiler was told the type, whether float or double, when the variable was declared.
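For example, here is the kind of declaration I had in mind (my own snippet, not from the book), where I assumed the declared type settles the question:

    // What I assumed: the declared variable type tells the compiler
    // how to treat the literal on the right-hand side.
    float  f = 123.23;   // I expected 123.23 to simply "be" a float here
    double d = 123.23;   // ...and a double here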
Can someone tell me why this is not the case?
(I can provide more of the passage from the book if needed.)
Because 1.2 is a double literal. 1.2F is a float literal.
Without the F, 1.2 is a double, and the compiler will implicitly convert it to float. This can generate a warning and could lead to loss of data (it probably won't in this case, but if the numbers are larger or the precision greater, it will).
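Here is a small program (my own sketch, not from the book) that shows the difference; compiling with extra warnings enabled, e.g. -Wconversion on GCC or Clang, should flag the narrowing line:

    #include <iostream>

    int main() {
        float  a = 1.2F;  // float literal: stored in a float, no conversion needed
        float  b = 1.2;   // double literal: implicitly converted (narrowed) to float
        double c = 1.2;   // double literal stored in a double: types match exactly

        std::cout << a << ' ' << b << ' ' << c << '\n';
    }

In other words, the literal's type is fixed by how you write it (1.2 vs 1.2F), and it is determined before the compiler ever looks at the type of the variable you assign it to.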