I use MSVC on my Windows machine, and G++ on my Linux one. I would like to learn about optimizations made by both of them.
Say I have this loop:
#define PI 3.141592654
double x = 0;
while (x < 2*PI)
{
    x += PI/4;
}
Are both compilers smart enough to change 2*PI to 6.283185308 at compile time, and PI/4 to 0.7853981635, or would they leave them be, so the program would have to recalculate both values on every iteration of the loop? What if I made PI a const double instead of #defining it, or even a plain double? Obviously in this case it wouldn't be a big deal if it had to recalculate on each iteration, but it might matter a lot more for a loop that iterates a billion times instead of just a few.
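That is, something like this (the plain-double variant just drops the const):

const double PI = 3.141592654;

int main()
{
    double x = 0;
    while (x < 2*PI)    // still folded to x < 6.283185308 ?
        x += PI/4;      // still folded to x += 0.7853981635 ?
}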
Is there a book on compiler optimizations, or some equivalent text? I am interested in them, and want to learn more about them.
Are both compilers smart enough to change 2*PI to 6.283185308 at compile time, and PI/4 to 0.7853981635
Yes.
What if I made PI a const double instead of #defining it,
Yes.
or even a plain double
In a program that simple I would expect this too; however, floating-point arithmetic is tricky, and compilers will probably be careful with it.
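One classic illustration of the trickiness: floating-point addition is not associative, so a compiler cannot freely reorder or fold expressions involving run-time values. A minimal sketch:

#include <cstdio>

int main()
{
    double a = 1e16, b = -1e16, c = 1.0;
    printf("%g\n", (a + b) + c);  // prints 1
    printf("%g\n", a + (b + c));  // prints 0: (b + c) rounds back to -1e16
}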
I have checked the generated assembly. With #define, const double, and constexpr, the compiler just precalculated the result of the whole calculation, eliminating the loop completely. With a plain double, it precalculated the values of 2*PI and PI/4 before the loop and reused them inside it. It did not risk the complete optimisation because the floating-point control flags could be changed intentionally elsewhere in the program, affecting the calculation.
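A sketch of the fully folded case described above (the function name spin is illustrative; you can inspect the output yourself with e.g. g++ -O2 -S):

constexpr double PI = 3.141592654;

double spin()
{
    double x = 0;
    while (x < 2*PI)    // 2*PI folds to 6.283185308
        x += PI/4;      // PI/4 folds to 0.7853981635
    return x;           // at -O2 the whole loop folds away:
}                       // spin() just loads the precomputed result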
EDIT: I forgot to add that I placed PI at global scope. At function scope, I got the same results as JLBorges.
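For the plain-double case at global scope, the difference looks roughly like this (a sketch; whether the loop survives depends on whether the compiler can prove PI is never modified elsewhere):

double PI = 3.141592654;   // global and mutable: other code could change it

double spin()
{
    double x = 0;
    // 2*PI and PI/4 are computed once and hoisted out of the loop,
    // but the loop itself is kept.
    while (x < 2*PI)
        x += PI/4;
    return x;
}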
AFAIK, the Microsoft compiler won't optimise to this extent (it doesn't have generalised constexpr support to the extent mandated by the standard; it may not be able to evaluate this entire function at compile time).
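To make "generalised constexpr" concrete: C++14 allows loops inside constexpr functions, so a conforming compiler must be able to evaluate something like this entirely at compile time (a sketch; the static_assert bound is deliberately loose):

constexpr double spin()
{
    constexpr double PI = 3.141592654;
    double x = 0;
    while (x < 2*PI)    // loops in constexpr functions require C++14
        x += PI/4;
    return x;
}

static_assert(spin() > 6.2, "forces compile-time evaluation");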