I need to write a program that takes a money amount in dollars and cents (e.g. 2.53) and outputs how many quarters, dimes, nickels, and pennies that is.

I'm stuck at the part where I multiply the input by 100 to turn it into an integer so I can use division and modulus; sometimes the multiplied value comes out one off.
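A minimal sketch of what I mean (assuming C; the amount is just a sample value):

```c
#include <stdio.h>

int main(void) {
    float amount = 4.20f;             /* sample input in dollars */
    int cents = (int)(amount * 100);  /* truncates toward zero */

    /* On typical IEEE 754 systems this prints 419, not 420. */
    printf("%d\n", cents);
    return 0;
}
```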
Try using double, not float, as your floating-point type. A float is only 32 bits, whereas a double is 64 bits and therefore has greater precision.
(Unless, of course, this is an assignment and your lecturer specified that you must use float...)
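A minimal sketch of the full coin breakdown using double (the sample amount is just for illustration; note that rounding with `round()` rather than truncating is an extra safeguard beyond switching types, since even a double can store a value like 252.9999...):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double amount = 2.53;                  /* sample input */
    int cents = (int)round(amount * 100);  /* round to nearest, don't truncate */

    int quarters = cents / 25; cents %= 25;
    int dimes    = cents / 10; cents %= 10;
    int nickels  = cents / 5;  cents %= 5;
    int pennies  = cents;

    printf("quarters: %d, dimes: %d, nickels: %d, pennies: %d\n",
           quarters, dimes, nickels, pennies);
    return 0;
}
```

(Compile with `-lm` if your toolchain needs it for `round()`.)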
Anyone? This doesn't make any sense. If I multiply anything by 100, it should come out as a whole number, so I don't see why some amounts I type in give me an integer that is one less than expected.
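The multiplication isn't the problem; the stored value is. Most decimal fractions (like .53) have no exact binary representation, so the variable already holds a value slightly below what you typed, and truncating to int then drops to the next integer down. A quick sketch that makes the stored value visible (assuming C, with output as seen on typical IEEE 754 systems):

```c
#include <stdio.h>

int main(void) {
    double d = 2.53;  /* the value from the question */

    /* Print with extra digits to expose what is actually stored. */
    printf("stored value: %.20f\n", d);           /* ~2.52999999999999980460 */
    printf("times 100   : %.20f\n", d * 100);     /* ~252.99999999999997157829 */
    printf("truncated   : %d\n", (int)(d * 100)); /* 252, not 253 */
    return 0;
}
```

That's why rounding to the nearest integer before the division/modulus step fixes the off-by-one.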