@PhysicsIsFun
Almost: I did my diploma in the central research department of a pharmaceutical company, but made my living in computing. What they do at CERN is far outside my sphere, but I am still interested in "everyday science". When your experiment shows deviations from the theoretical predictions, it may be because the 'closed system' is not as closed as it is in theory. You could try to compensate with appropriate measures (trace heating, ...), but that is just an idea, and a quite theoretical one ;)
I mentioned the Mandelbrot set because there is (or was?) a rumour that its shape is the result of deviations at every iteration, the sum of many small errors. You have to magnify very deeply before your system's accuracy has a significant effect; or, for "normal" views, you would have to alter the iteration results quite drastically to cause visible abnormalities. Also the above-mentioned paper (discussion, to be precise), "How to calculate the errors of single and double precision", shows that with 10e3 iterations there is almost no difference, whereas 10e5 iterations show a distinct effect.
Back to the subject: there is a FORTRAN compiler option, autodbl, that promotes all variables to their higher-precision counterparts without any change to the source code. If there is a similar option for your C++ compiler, you could very quickly check the effect of doubling the variables' size.
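As far as I know there is no such blanket switch in the common C++ compilers, but a project-wide type alias gets close: write the algorithm against one name and pick the actual precision at build time. A minimal sketch (the macro REAL_T and the alias real are names I made up for this example, not any compiler feature):

```cpp
// Build at single precision:  g++ -DREAL_T=float  test.cpp
// Build at double precision:  g++ -DREAL_T=double test.cpp
#include <cstdio>

#ifndef REAL_T
#define REAL_T double   // default precision if no flag is given
#endif
using real = REAL_T;

int main() {
    real x = real(1) / real(3);   // same source, precision chosen at build time
    std::printf("1/3 = %.17g (sizeof(real) = %zu bytes)\n",
                double(x), sizeof(real));
}
```

Rebuilding with the other flag then shows immediately whether your results change, without touching the algorithm itself.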