I'm new to the programming world and would like to ask a question.
I need to write a program that declares 10 variables of appropriate types and later assigns those 10 variables these values:
200, -100, 15.123, 15.56789, T, true, "true", 2/5, 1/4, a.
I managed to get all of these printed on screen except 2/5 and 1/4; for those two I got zero printed instead.
Is it wrong to use float for those two numbers, or am I doing something else wrong?
First off, I would change "float" to "double". A double has greater precision than a float and will store the value more accurately; these days "double" is the preferred floating-point type for variables.
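For example, a quick standalone sketch (the variable names here are just for illustration) shows how the two types hold the value 15.56789:

```cpp
#include <iostream>
#include <iomanip>

int main() {
    float  f = 15.56789f;  // float: roughly 7 significant decimal digits
    double d = 15.56789;   // double: roughly 15-16 significant decimal digits

    std::cout << std::setprecision(12)
              << "float : " << f << '\n'   // digits past ~7 are rounding noise
              << "double: " << d << '\n';  // stays accurate to the digits you typed
    return 0;
}
```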
On lines 15 and 16 you are using two "int"s to create a double, so the division is done in integer arithmetic first: 2 / 5 truncates to 0 before it is ever stored, and you end up losing the decimal portion just to store "0.0000..." in the variable. If you change the numbers to "e = 2.0 / 5.0;" and the same for the next line, you should see a difference.
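Here is a small self-contained sketch of that fix (the name e comes from the snippet above; bad and f are names I made up for illustration, so adjust them to match your own code):

```cpp
#include <iostream>

int main() {
    double bad = 2 / 5;      // int / int happens first: 2 / 5 truncates to 0
    double e   = 2.0 / 5.0;  // double / double keeps the fraction: 0.4
    double f   = 1.0 / 4.0;  // same idea for the other value: 0.25

    std::cout << bad << ' ' << e << ' ' << f << '\n';  // prints: 0 0.4 0.25
    return 0;
}
```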
What you have works, but you could just as easily have initialized the variables where they are defined and eliminated lines 11 - 20.
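Something along these lines (a sketch only; I'm guessing the types and variable names, since I can't see your original code here):

```cpp
#include <iostream>
#include <string>

int main() {
    // Each variable is initialized right where it is defined,
    // so no separate assignment block is needed later.
    int         a = 200;
    int         b = -100;
    double      c = 15.123;
    double      d = 15.56789;
    char        t = 'T';
    bool        flag = true;
    std::string word = "true";
    double      e = 2.0 / 5.0;   // 0.4
    double      g = 1.0 / 4.0;   // 0.25
    char        letter = 'a';

    std::cout << a << ' ' << b << ' ' << c << ' ' << d << ' ' << t << ' '
              << std::boolalpha << flag << ' ' << word << ' '
              << e << ' ' << g << ' ' << letter << '\n';
    return 0;
}
```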