When I run a calculation with variables declared as double, I expect a result like 3.0, but in the debugger I see the value is actually 2.999999999997. It is very close, but not exact. Some articles on the internet say this is caused by floating-point representation: each intermediate result is rounded to the nearest representable binary value, so small errors accumulate.
Is there any way to increase the precision of these double values?
Thank you for your answers. I have checked the link mentioned above. If we really need high precision, we can use a library like GMP, but it costs more compute, which means lower speed...
Otherwise, stick with plain double and be careful with this problem...