Hey guys, I would like to make a program that asks the user to input a number (float) and then displays that number back with a precision of two decimal places.
For example, if the user enters 15, my program will show an output of 15.00.
But if the user enters 15.151, then my program will only show 15.15.
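
A minimal sketch of one way to do this, using setf() with the ios::floatfield mask (discussed in the reply below):

#include <iostream>
using namespace std;

int main() {
    float value;
    cout << "Enter a number: ";
    cin >> value;
    cout.setf(ios::fixed, ios::floatfield); // select fixed-point notation
    cout.precision(2);                      // two digits after the decimal point
    cout << value << '\n';                  // 15 -> 15.00, 15.151 -> 15.15
    return 0;
}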
In the example you gave, ios::floatfield specifies a group of flags. It is passed as the second parameter, the mask field: only the flags that belong to that group will be affected by the call to setf().
There are just two flags in the group. One, both, or none of these may be set (see the sketch after the list):
ios::fixed
ios::scientific
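
To make the grouping concrete, here is a small sketch of my own cycling through the three possible states of the group:

#include <iostream>
using namespace std;

int main() {
    double d = 1234.5678;

    cout.setf(ios::fixed, ios::floatfield);      // fixed set, scientific cleared
    cout << d << '\n';                           // e.g. 1234.567800

    cout.setf(ios::scientific, ios::floatfield); // scientific set, fixed cleared
    cout << d << '\n';                           // e.g. 1.234568e+03

    cout.unsetf(ios::floatfield);                // neither set: default notation
    cout << d << '\n';                           // e.g. 1234.57
    return 0;
}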
It's easy to get in a tangle when trying to understand bit flags. One way of looking at this code is to read setf()'s second argument as a mask: the call first clears every flag in that group, then sets the flag given as the first argument.
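
Written out with explicit bit operations, roughly what setf(ios::fixed, ios::floatfield) does looks like this (an illustration, not the actual library source):

#include <iostream>
using namespace std;

int main() {
    ios::fmtflags f = cout.flags(); // read the current flag word
    f &= ~ios::floatfield;          // clear every bit in the floatfield group
    f |= ios::fixed;                // set just the fixed bit
    cout.flags(f);                  // write the flag word back

    cout.precision(2);
    cout << 15.151 << '\n';         // prints 15.15
    return 0;
}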