Yes, this is undefined behavior. You're essentially storing a pointer to a stack variable and dereferencing it later. You would expect this to print -5 but it prints a junk value.
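For reference, a minimal sketch of the pattern being described (my own reconstruction, not the original code; make_callback and saved are made-up names):

    #include <cstdio>
    #include <functional>

    std::function<void()> saved;  // outlives make_callback's stack frame

    void make_callback() {
        int x = -5;  // local variable, dies when make_callback returns
        saved = [&x] { std::printf("%d\n", x); };  // captures x by reference
    }

    int main() {
        make_callback();
        saved();  // the captured reference now dangles: may print -5, may print junk
    }

Whether you see -5 or garbage just depends on whether anything has reused that stack slot yet, which is why results vary by compiler and optimization level.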
Oh, as for the lack of a warning, I'm not sure. I don't get one on MSVC either, but I have a feeling it's too deep a problem for the compiler to see. It would need to work out that you're making a lambda that captures local variables, passing a reference to that lambda to a function, and that the function is saving that reference for later. I would expect a linter to be able to catch this, though.
Yes, when I compile this program with optimizations turned on I get -5 with GCC. Clang prints junk no matter what you do, but GCC only prints junk at -O0; at -O3 it prints -5. So if you're trying to demonstrate the bug, you want to compile with -O0. MSVC printed garbage for me as well, though I think that was with optimizations on.
I remember a time when it was actually encouraged to grab the value of a routine's local stack variable immediately after the routine exited, on the grounds that it 'would always be still good from its last value' for long enough. It always worked, but hardware, OSes, etc. were much simpler then.
Can you cite any sources? On any architecture with asynchronous interrupts, the contents of the stack past the stack pointer are undefined as soon as the stack pointer is moved.
I can only imagine it would work on a system with predictable interrupts, e.g. one where interrupts only fire every 15 ms.
No, everything out there now is a smackdown on the practice. I distinctly remember being told it was acceptable by multiple people way, way back. That doesn't make them right, though. This would have been pre-internet even, so mid-to-late '80s.
Would be interesting to see actual production code like that, or documentation that explicitly gave directions to do it, but it's probably just lost in time.
Still a really bad idea. On MS-DOS the OS uses the stack of your application to run interrupt handlers. So your program is running along, you move the stack pointer back and return, but before you can read that value back a hardware interrupt occurs, the DOS kernel pushes and pops a bunch of values, and what you wanted to read is now junk.
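You can fake that effect in ordinary C++ without a real interrupt. A hedged sketch (f, g, and dangling are made-up names; the call to g stands in for the interrupt handler reusing the same stack region):

    #include <cstdio>

    int* dangling = nullptr;

    void f() {
        int secret = 42;  // lives in f's stack frame
        dangling = &secret;
    }

    void g() {
        // Reuses roughly the stack addresses f's frame occupied,
        // much like an interrupt handler pushing registers would.
        volatile int clobber[16];
        for (int i = 0; i < 16; ++i) clobber[i] = i;
    }

    int main() {
        f();
        std::printf("%d\n", *dangling);  // often still 42 -- why the old trick seemed safe
        f();
        g();
        std::printf("%d\n", *dangling);  // likely clobbered; both reads are UB regardless
    }

The first read "working" is exactly the trap: nothing has touched that memory yet. A hardware interrupt just plays the role of g() at a moment you don't control.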
The real bad thing about that is you'll never see it in a debugger. You'll disassemble your program, step through it one instruction at a time and it'll work fine. You'll compile with and without optimizations and it'll work fine. You'll run the program once or twice and it'll work fine. But it'll just fail at random when you actually try to use the program for an extended period of time.
My point is there are often things going on that you're not even aware of, completely invisible to you, that can mangle the stack. Even if the assembly output looks fine, too.