Hey guys. I am trying to make a program that will convert a $ amount to change. Ex. $4.35 = 4 dollars, 1 quarter, 1 dime, 0 nickels, 0 pennies.
The problem is that when you put in some values, it doesn't work correctly.
So to find the problem, I removed the Penny and Nickel If statements.
So with only the dollar, quarter, and dime decision statements remaining:
Ex. When I put in $0.35, it gives me 0 dollars, 1 quarter, and 0 dimes, with $0.10 still remaining. How is this so???
My brother was helping me debug this and we found this:
When I enter .35 for change, it shows up as 0.349999999999999998 in the debugger.
This only happens with certain numbers; for example, .25 shows up as 0.250000000000000001.
Yes, that's floating point arithmetic, which is a relatively esoteric topic. Without going into detail, many values that can be represented exactly in decimal cannot be represented exactly in binary, and floating point math tends to accumulate rounding errors.
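If you want to see exactly what your double is holding, Java's BigDecimal(double) constructor exposes the precise binary value. This is just a quick demo (I'm assuming Java here since the thread doesn't say; the class name is a placeholder):

```java
import java.math.BigDecimal;

public class ExactValueDemo {
    public static void main(String[] args) {
        // new BigDecimal(double) shows the exact binary value stored in the double.
        System.out.println(new BigDecimal(0.35)); // a hair below 0.35 (0.34999999999999997...)
        System.out.println(new BigDecimal(0.1));  // a hair above 0.1  (0.10000000000000000555...)
    }
}
```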
Floating point numbers should not be compared for equality if you can help it.
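If you really do have to compare doubles, compare them within a small tolerance rather than with ==. A minimal sketch (the method name and the epsilon value are just placeholders):

```java
public class ApproxEquals {
    // Compare doubles within a small tolerance instead of with ==.
    static boolean approxEqual(double a, double b, double epsilon) {
        return Math.abs(a - b) < epsilon;
    }

    public static void main(String[] args) {
        double remaining = 0.35 - 0.25 - 0.10;              // not exactly 0.0 in binary
        System.out.println(remaining == 0.0);               // false
        System.out.println(approxEqual(remaining, 0.0, 1e-9)); // true
    }
}
```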
Because of rounding error, real financial software will not use floating point arithmetic, at least without taking special precautions. For your purposes, you can use two integer values: one for cents, the other for dollars, or maybe just one integer, keeping all your values in terms of cents.
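Here's a rough sketch of what that could look like, keeping everything in whole cents and only handling the decimal point while parsing the input as text. The class and variable names are made up and the input handling is deliberately simple, so treat it as an illustration rather than a drop-in for your program:

```java
import java.util.Scanner;

public class ChangeMaker {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("Enter amount (e.g. 4.35): ");
        String input = in.nextLine().trim().replace("$", "");

        // Parse dollars and cents as text so the amount never lives in a double.
        String[] parts = input.split("\\.");
        int dollars = parts[0].isEmpty() ? 0 : Integer.parseInt(parts[0]);
        int cents = parts.length > 1 ? Integer.parseInt((parts[1] + "00").substring(0, 2)) : 0;
        int total = dollars * 100 + cents;   // everything below works in whole cents

        int wholeDollars = total / 100;  total %= 100;
        int quarters     = total / 25;   total %= 25;
        int dimes        = total / 10;   total %= 10;
        int nickels      = total / 5;    total %= 5;
        int pennies      = total;

        System.out.println(wholeDollars + " dollars, " + quarters + " quarters, "
                + dimes + " dimes, " + nickels + " nickels, " + pennies + " pennies");
    }
}
```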
It's worth noting that this greedy algorithm (always take the largest coin that fits) doesn't minimize the number of coins for every coin system, although it happens to be optimal for US denominations. The change-making problem is a textbook dynamic programming problem. For example, with denominations {1, 3, 4} and an amount of 6, greedy picks 4+1+1 (3 coins) while the optimum is 3+3 (2 coins); see the sketch below.
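A small dynamic programming sketch that finds the minimum coin count, not something you need for US coins, just to illustrate the textbook approach (again assuming Java; names are placeholders):

```java
import java.util.Arrays;

public class MinCoins {
    // Classic DP: best[a] = fewest coins needed to make amount a,
    // or Integer.MAX_VALUE if a can't be made from the given denominations.
    static int minCoins(int[] denominations, int amount) {
        int[] best = new int[amount + 1];
        Arrays.fill(best, Integer.MAX_VALUE);
        best[0] = 0;
        for (int a = 1; a <= amount; a++) {
            for (int coin : denominations) {
                if (coin <= a && best[a - coin] != Integer.MAX_VALUE) {
                    best[a] = Math.min(best[a], best[a - coin] + 1);
                }
            }
        }
        return best[amount] == Integer.MAX_VALUE ? -1 : best[amount];
    }

    public static void main(String[] args) {
        // Greedy would use 4+1+1 = 3 coins for 6; DP finds 3+3 = 2 coins.
        System.out.println(minCoins(new int[]{1, 3, 4}, 6));        // 2
        // For US denominations greedy happens to be optimal, and DP agrees.
        System.out.println(minCoins(new int[]{1, 5, 10, 25}, 35));  // 2 (25 + 10)
    }
}
```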
Ok, thanks for the replies. I did a bunch of reading on this topic last night, so it's a big surprise to me that I haven't had a problem with it until now.
When I had to do a similar problem, my teacher just said to use an int and treat the last two digits as the decimals, i.e. 12866 would be $128.66 and 24 would just be $.24.
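That's the same integer-cents idea. Turning the int back into a printable dollar string is just division and remainder, something like this (hypothetical helper, same Java assumption as above):

```java
public class CentsFormat {
    // Keep money as an int number of cents; only format as dollars for display.
    static String format(int cents) {
        return String.format("$%d.%02d", cents / 100, cents % 100);
    }

    public static void main(String[] args) {
        System.out.println(format(12866)); // $128.66
        System.out.println(format(24));    // $0.24
    }
}
```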