This is not an issue with typecasting, but rather an issue with floating-point precision. Floating-point values have a fixed size but are expected to cover a gigantic range of numbers, so they cannot possibly represent most numbers exactly.
In your case, 52575.8119999… is as close as a double can get to 52575.812.
really? double precision should have no problem representing numbers to the third decimal point. I am getting erroneous results in my program because I compare a variable against 52575.812 with greater-than-or-equal, and the program thinks the condition is false because of the 52575.8119999 issue.
really? double precision should have no problem representing numbers to the third decimal point
Floating point precision does not work how you'd expect: doubles cannot even represent 0.1 exactly. The closest IEEE 754 double to 0.1 is 0.1000000000000000055511151231257827 (a single-precision float only manages 0.10000000149011612).
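You can see this directly. A quick sketch in Python, whose floats are IEEE 754 doubles, so the same behavior applies to a C/C++ double:

```python
# Python floats are IEEE 754 doubles, so this mirrors C/C++ double behavior.
print(f"{0.1:.20f}")        # 0.10000000000000000555 -- not exactly 0.1
print(0.1 + 0.2 == 0.3)     # False: both sides carry tiny representation errors
print(f"{0.1 + 0.2:.20f}")  # slightly above 0.3
```

The literal `0.1` is silently replaced by the nearest representable double the moment it is parsed, before any arithmetic even happens.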
Floating point involves approximation in how numbers are represented, and when you divide two approximated numbers, the result may be slightly off from what you expect, since the numbers you are dividing are not exactly what you want them to be.
This is known as "rounding error" and is to be expected with just about any floating point computation.
How big the error is can vary, but you should expect it.
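For the greater-than-or-equal comparison in your comment, the usual workaround is to compare with a small tolerance rather than exactly. A sketch in Python (floats are IEEE 754 doubles; the 52575.812 threshold and the 52575.8119999 value are taken from the question):

```python
import math

threshold = 52575.812
value = 52575.8119999  # e.g. a computed result that "should" equal the threshold

# A strict comparison is defeated by rounding error:
print(value >= threshold)  # False

# Treat values within a small relative tolerance as equal before comparing:
print(value >= threshold or math.isclose(value, threshold, rel_tol=1e-9))  # True
```

The right tolerance depends on how much error your computation can accumulate; `1e-9` relative is just an illustrative choice here.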
EDIT:
You might be able to improve precision and reduce rounding error at additional CPU cost; check your compiler settings. Visual Studio, for example, lets you trade speed against accuracy for floating-point computations via the /fp:fast, /fp:precise, and /fp:strict switches.
Another alternative is to use rationals instead of floats. The catch is that with fixed-width integers it is not difficult to overflow the numerator or denominator after only a few divisions or multiplications.
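As an illustration, Python's stdlib `fractions.Fraction` is an exact rational type (Python integers are arbitrary-precision, so the cost there shows up as unboundedly growing numerators and denominators rather than overflow):

```python
from fractions import Fraction

# Exact arithmetic: no rounding anywhere.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# The string constructor parses decimal literals exactly.
threshold = Fraction("52575.812")
print(threshold == Fraction(52575812, 1000))  # True

# The drawback: repeated multiplication makes the terms grow quickly.
x = Fraction(355, 113)
for _ in range(5):
    x = x * x                       # denominator becomes 113**32
print(x.denominator.bit_length())   # over 200 bits after just 5 squarings
```

With 32- or 64-bit integers the same five squarings would have overflowed long before this point, which is the limitation mentioned above.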