
Couldn't a compiler figure out the (int *) cast trick too?
And... isn't the fact that the address of i and the contents of i don't match up... a B-U-G?
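For reference, here is a minimal sketch of the situation being discussed; the variable name i and the value 10 are assumptions about the original example:

    #include <iostream>

    int main() {
        const int i = 10;
        int *p = (int *)&i;       // the "(int *) cast trick": casting away const
        *p = 20;                  // undefined behavior: modifying an object declared const

        std::cout << i << '\n';   // many compilers print 10: i was folded into the code
        std::cout << *p << '\n';  // often prints 20: a real load from i's address
    }

Because writing through p is undefined behavior, nothing here is guaranteed; the "10 vs. 20" split is simply what commonly happens when the compiler constant-folds i.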
Depending on how smart it is, a compiler can take advantage of an object being a constant in several ways. For example, the initializer for a constant is often (but not always) a constant expression (§C.5); if it is, it can be evaluated at compile time. Further, if the compiler knows every use of the const, it need not allocate space to hold it. For example:

    const int c1 = 1;
    const int c2 = 2;
    const int c3 = my_f(3);  // don't know the value of c3 at compile time
    extern const int c4;     // don't know the value of c4 at compile time
    const int *p = &c2;      // need to allocate space for c2

Given this, the compiler knows the values of c1 and c2 so that they can be used in constant expressions. Because the values of c3 and c4 are not known at compile time (using only the information available in this compilation unit; see §9.1), storage must be allocated for c3 and c4. Because the address of c2 is taken (and presumably used somewhere), storage must be allocated for c2. The simple and common case is the one in which the value of the constant is known at compile time and no storage needs to be allocated; c1 is an example of that. The keyword extern indicates that c4 is defined elsewhere (§9.2). It is typically necessary to allocate store for an array of constants because the compiler cannot, in general, figure out which elements of the array are referred to in expressions. On many machines, however, efficiency improvements can be achieved even in this case by placing arrays of constants in read-only storage.
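The array case mentioned at the end might look like the following sketch; the names table and f are hypothetical:

    const int table[] = {2, 3, 5, 7, 11};

    int f(int k) {
        // k is generally unknown at compile time, so table[k] can't be
        // folded to a constant; the array itself must be allocated,
        // typically in a read-only section such as .rodata.
        return table[k];
    }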
First, I hope the "shut up" was meant in a funny way.
Couldn't a really smart compiler see that I'm accessing both i and &i, decide that that consistency must be maintained, and provide an error or warning at compile time?
Or just outright disallow the optimization and make the cout statement actually look up i instead of trusting it's 10? (See the sketch below.)
Even though it can see, in the same scope, that it is being changed?
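On the "make the cout statement actually look up i" point: one standard way to force a real memory read is volatile, though it doesn't make modifying a const object any less undefined. A minimal sketch, reusing the assumed names from above:

    #include <iostream>

    int main() {
        const volatile int i = 10;  // volatile: every access must actually read
                                    // memory, so the compiler can't fold i to 10
        int *p = (int *)&i;
        *p = 20;                    // still undefined behavior, volatile or not

        std::cout << i << '\n';     // compiled as a genuine load from i's address
    }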
By the way, how do you change what a reference references? I'd like to know! (Not to use it, just to know it.)
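For what it's worth: in standard C++ a reference can't be reseated at all; assigning through it assigns to the referent. The closest standard tool is std::reference_wrapper, which can be rebound. A small sketch:

    #include <functional>
    #include <iostream>

    int main() {
        int a = 1, b = 2;

        int &r = a;
        r = b;        // does NOT rebind r; it assigns b's value (2) to a

        std::reference_wrapper<int> w = a;
        w = b;        // this DOES rebind: w now refers to b

        std::cout << a << ' ' << w.get() << '\n';  // prints "2 2"
    }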
Couldn't a given compiler not do this optimization in version, say, 3.0 (i.e., look up i in memory rather than "hard-compile" it as 10), and then institute the optimization in version 3.5, making the code act differently than the developer intended or expected?
I wonder what other scenarios allow for different behavior in a compiled program from the same source code under different compilers.
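One well-known example of the same source behaving differently under different compilers: the order in which the operands of most operators are evaluated is unspecified. A sketch (f and g are illustrative names):

    #include <iostream>

    int f() { std::cout << "f "; return 1; }
    int g() { std::cout << "g "; return 2; }

    int main() {
        int x = f() + g();  // unspecified whether f() or g() runs first:
                            // Compiler A may print "f g", Compiler B "g f"
        std::cout << x << '\n';
    }

The sum is always 3, but the side-effect ordering is the compiler's choice; undefined behavior (like the const-modification trick above) is an even stronger source of divergence.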
And if these ambiguities exist, I wonder if there's a way to code that is the most conservative.
By the way, having Compilers A and B is not unusual, right? I've worked in environments where the same source code was used in Xcode, Visual Studio, and various compilers on Unix and Linux. I guess there are always preprocessor directives to account for differences. But that assumes you're catching all the issues before your customer does.
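A typical shape for those preprocessor directives, using the compilers' standard predefined macros (the macro name MY_FORCE_INLINE is hypothetical):

    #if defined(_MSC_VER)
        // Visual Studio
        #define MY_FORCE_INLINE __forceinline
    #elif defined(__clang__) || defined(__GNUC__)
        // Clang (Xcode) and GCC (Linux/Unix)
        #define MY_FORCE_INLINE inline __attribute__((always_inline))
    #else
        #define MY_FORCE_INLINE inline
    #endif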