You are getting undefined behavior; the fact that, for example, hel() was evaluated before lo(), as you were expecting, doesn't mean it is suddenly defined.
There is no undefined behaviour here.
The function calls in hel() + lo() + wor() + ld() are indeterminately sequenced - they may be evaluated in any arbitrary order, but the function executions would not interleave with each other.
JL, from what I've read online that seems to be right. I understand that there is an order of evaluation, but why would something be evaluated in a different order than it is executed?
The evaluation of an expression with a function call operator eg. F( A1, A2 ) involves:
a. Evaluate F (say to get fresult)
b. Evaluate A1, A2 (say to get arg1, arg2)
c. Evaluate fresult( arg1, arg2 )
Step c. is the function execution.
Note:
Before C++17, the evaluations of F, A1 and A2 are unsequenced with respect to each other
Since C++17, the evaluation of F is sequenced before the evaluations of A1 and A2; the evaluations of A1 and A2 are indeterminately sequenced with respect to each other
using fn_type = int ( int ) ;
fn_type* fn_pointers[5] = { /* .... */ } ; // array of pointers to functions
int i = 2 ;
fn_pointers[i]( ++i ) ; // undefined behaviour prior to C++17
If a variable is subject to its value being changed more than once within a sub-expression, that would necessitate having limitations - possibly because the evaluation is carried out from the leaf nodes moving up the tree? I define a sub-expression as being part of a full expression, which ends with a semicolon.
This is all a guess on my part - I may not have quite the right wording to express my idea.
In Chervil's code, I notice the oss expression was evaluated right to left. I wonder whether that is because of the use of an AST, or whether something else is going on there?
Could I also observe that all the rules C++ has about various (sometimes seemingly simple) things are there out of necessity?
> I wonder if these rules about evaluation are related to (or caused by) the compiler's probable use of the Abstract Syntax Tree.
Rules about expression evaluation (and in general undefined behaviour) are there to allow implementations a large amount of leeway in their attempt to generate as efficient code as possible.
Undefined behavior exists in C-based languages because the designers of C wanted it to be an extremely efficient low-level programming language. In contrast, languages like Java (and many other 'safe' languages) have eschewed undefined behavior because they want safe and reproducible behavior across implementations, and are willing to sacrifice performance to get it.
LLVM Blog - http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html
P0145R3 (C++17):
The current rules have been in effect for more than three decades. So, why change them now?
...
The changes suggested below are conservative, pragmatic, with one overriding guiding principle: effective support for idiomatic C++. In particular, when choosing between several alternatives, we look for what will provide better support for existing idioms, what will nurture and sustain new programming techniques. Considerations such as how an expression is internally elaborated (e.g. function call), while important, are secondary. The primary focus is on what the programmer reads and writes, in particular in generic codes, not what the compiler internally does according to fairly arcane rules.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0145r3.pdf
Do many rules exist to disallow things which are logically wrong, or for which there is a good reason why they shouldn't be allowed? Some of the reasons may or may not be obvious.
Other things may not be so obvious - I was thinking evaluation by the AST might be one of those, but there are probably lots of other examples.
I am not trying to counter what you have mentioned; just that I guess there are, firstly, rules dictated by sheer logic, and then rules which give compilers a great deal of leeway to allow for many optimisations, as mentioned in the articles you linked. My main point is that the underlying reasons for the logic-related rules may not be so obvious. Is that a reasonable way of looking at it?