@moschops.
if you have a pair of doubles, and then a float and a double holding the same values, and you multiply both pairs by, let's say, 100000, the difference will be great.
"Double" accuracy doesn't mean anything. A "double" can only represent a very small subset of numbers. Saying you need "double accuracy" is meaningless. The only accuracy that matters is "accurate enough for this particular case". I can present an infinity of numbers that a double cannot represent, and an infinity of values that any given double is ever less able to even approximate.
It is certainly possible to take a perfectly accurate double, multiply it by another perfectly accurate double, and get a result that is inaccurate. So if you're expecting that the result of a double multiplication can be permanently labelled "accurate", you've misunderstood how floating point numbers work.
The thing is that I really don't know how doubles and floats exactly work.
I have read about them but they seem really complicated to me.
They work very much like your scientific calculator. Your calculator might have 10 digits of accuracy and a 3 digit exponent. So numbers are represented as M.MMMMMMMMM * 10^EEE, where M.MMMMMMMMM is called the mantissa and EEE is called the exponent. The numbers may be *displayed* without the exponent, but they are always represented that way. In other words, 1,234,567 is actually represented as 1.234567 * 10^6.
In float and double, it's basically the same, only the mantissa and exponent are in binary instead of decimal. So basically each type is good for some degree of accuracy (the mantissa) and some range of large and small values (the exponent).
Then I have floats that will usually be something like 0.0001 - 2.0
A value like 0.0001 can't be represented exactly using float. So when you multiply a number by it, the result will also be inaccurate. The only question is how close it will be.
When working with float and double, if you want to see whether two numbers are equal, you really need to check whether they are close to each other. So rather than saying if (a == b)
say if (fabs(a-b) < smallNumber)
One other thing, float does not have the same precision as double. float has 6 or 7 significant figures of precision, while double has 15 or 16 significant figures of precision. You can see the effect from Chervil's example.
The precision of float is easily exceeded, so if you are worried about it, prefer to use double. The default type in C and C++ is double, so stick with that, unless some graphics library requires float, or you have billions of them and have some way of maintaining accuracy.
As mentioned by others, it's the accuracy (a different concept than precision) which matters. Even though there is more precision in a double, it doesn't mean there is an absolute or even constant amount of accuracy in all situations.
Having to deal with some precision value (such as +/- 0.005m) is sometimes a good thing, as it models what happens in real world measurement: nothing is ever exact, so one is forced to allow for this. On the other hand it is a pain when not related to physical measurement, because one just wants an accurate number.
Some compilers have the ability to use exact decimal types; gcc has various types that can be found via #include <decimal/decimal.h>