I have a general question about a C++ program execution.
I was running the same program for several days without rebooting my system, and everything was fine: I was always getting the same result at every run. Then my system crashed and I had to reboot it. I ran exactly the same program without recompiling it, but this time I got a slightly different result on a numerical computation (the difference between the two results is on the order of 1e-8).
So my question is: is it possible that two runs of the same program on the same machine produce two different results (with just a reboot between the two runs)? Or is it a bug in my program?
Unless you have faulty hardware, it's likely that it's a bug in your program. You're probably relying on undefined behavior, such as uninitialized variables, out-of-bounds array accesses, other invalid pointer accesses or faulty synchronization (if the program is multi-threaded).
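A minimal sketch of the uninitialized-variable case (the function names here are hypothetical, not from the original program): an accumulator that is never initialized starts from whatever happens to be on the stack, so the result can change between runs even though the binary hasn't.

```cpp
#include <vector>

// Hypothetical example of the uninitialized-variable bug: reading `acc`
// before it is assigned is undefined behavior, and in practice the sum
// starts from stack garbage that can differ from run to run.
double sum_buggy(const std::vector<double>& v) {
    double acc;                   // never initialized
    for (double x : v) acc += x;  // first += reads an indeterminate value
    return acc;
}

// The fix is one token: give the accumulator a deterministic start.
double sum_fixed(const std::vector<double>& v) {
    double acc = 0.0;
    for (double x : v) acc += x;
    return acc;
}
```

Note that the buggy version can appear to work for days, because the garbage on the stack often happens to be the same between runs; a reboot is exactly the kind of event that changes it.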
Thank you for the quick reply.
Indeed I might have a bug in my program, but in general I was wondering if the state of the CPU/memory might have any effect on floating-point computation precision (under the hypothesis that there is no bug such as the ones you suggest). Apparently the answer is no, unless there is some hardware problem.
It's possible the computations just have different results.
No. Floating point operations may be imprecise, but they're still deterministic. If two runs of the same program produced different results, the only possibilities are that the input (including input from such things as the system clock) was different, or that there's a bug in the program. My bet is uninitialized objects.
Given a program that uses floats, it's hard to predict its exact behavior on a given computer without running it because of the way floating point values are converted back and forth between memory and extended precision floating point registers. I posted a question one or two years ago about a program that would recurse infinitely or return after a few recursion levels depending on whether a floating point value was printed or not.
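That back-and-forth conversion can be sketched directly, assuming a platform where long double is wider than double (e.g. the x87 80-bit format, or quad precision on some ABIs): rounding a value through a 64-bit double and back is essentially what a spill from an extended-precision register to memory does.

```cpp
#include <cmath>

// Sketch of what a spill from an extended-precision register to a
// 64-bit memory slot does to a value. On platforms where long double
// is wider than double, the round trip is lossy.
bool spill_loses_precision() {
    long double extended = std::sqrt(2.0L);          // extended-precision result
    double spilled = static_cast<double>(extended);  // rounded to 64 bits, as in a spill
    return static_cast<long double>(spilled) != extended;
}
```

Whether and when the compiler inserts such spills depends on register pressure, optimization level, and surrounding code (such as a print statement), which is what makes the behavior hard to predict in advance.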
However, as unpredictable as they are, floating point operations are still deterministic. For example, y=int(fmod(sqrt(x)*1E+10,10.0)) can give only one value y for each value x.
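For instance, wrapping that expression in a function (the name is mine) and calling it repeatedly can only ever produce one y per x on a given binary and machine:

```cpp
#include <cmath>

// The expression from the text as a function: hard to predict in
// advance, but fully deterministic -- each x maps to exactly one y
// for a given build on a given machine.
int digit(double x) {
    return static_cast<int>(std::fmod(std::sqrt(x) * 1e10, 10.0));
}
```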
So yes, floating-point operations are deterministic, but part of what determines them is outside the scope of C++... effectively making them somewhat non-deterministic.
You're confusing determinism and predictability. Like I said, it's hard to predict the behavior of a correct (C++) program that works with floats, because you don't know exactly what the compiler will do or how the hardware will behave. That's unpredictability, and it arises from missing information. But once the program has finished and you have its output, predicting all future results of the same (in the binary sense) program running on the same computer for the same (again, binary) input is trivial. That's determinism.
Technically, cos() will give the exact same result for a given input. The discrepancy arises from the compiler generating different code for different calls depending on a myriad of factors. The code is deterministic, it's just that you're trying to compare different algorithms applied to the same input, not the same algorithm applied to the same input. It's not cos() that's non-deterministic; it's the compiler (sort of).
The discrepancy doesn't translate to whole programs, because the program code doesn't change between runs (unless you recompile it).
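A concrete way to see the "different code for different calls" point (a sketch; on most modern SSE2 targets the two agree bit-for-bit, while older x87 builds or aggressive optimization settings can make them differ in the last bits):

```cpp
#include <cmath>

// cos() applied two ways to the same input: a literal argument the
// compiler may fold at compile time (sometimes at higher internal
// precision), and a runtime value that must go through the library.
// Same input, potentially different generated code.
double cos_folded() {
    return std::cos(0.5);   // candidate for compile-time constant folding
}

double cos_runtime(double x) {
    return std::cos(x);     // evaluated at run time
}
```

Either way, both paths are fixed once the program is compiled, which is why the discrepancy doesn't appear between two runs of the same binary.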
I guess where I'm fuzzy is whether or not excess register bits are zeroed when loading a data type smaller than the register itself, or if the bits are left unchanged from previous operations.
If the registers aren't zeroed, the excess garbage in them would impact floating-point calculations, the same way using an uninitialized variable would. It could explain the different results in this case. The thing is, the difference would be much less significant -- as is the case here.
I was under the impression registers are not zeroed, but my understanding of how modern FPUs work is limited to say the least.
"I was under the impression registers are not zeroed"
The compiler wouldn't generate code that would cause a register with uncertain state to be used as a source operand. The real question is whether the data that was copied to that register is meaningful.