thread t1 { [&]() {
    f(x); // sequential operation...
    {     // sync, restrict the scope
        unique_lock<mutex> xlock(mutex_);
        tM_continue = true; // modified under the mutex
    }
    condvar.notify_all(); // "(the lock does not need to be held for notification)"
    ...
} };
Thanks for the quick reply!
The rule is there and I cannot challenge it. Still, I wonder what the reason behind it is, as I tend to write my own locks [...].
The only scenario I can think of that could lead to a deadlock is one where these statements have been re-ordered:
Intel x86-64 cannot reorder these instructions, because they all involve some kind of store, and x86-64 does not reorder stores with other stores [1]. That leaves the compiler as the suspect, even when the source code is built with -O0. So, if a compiler barrier were added between the three statements, would the generated code be deadlock-free and correct?
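To make the question concrete, this is roughly the hypothetical I have in mind: the flag is written without the mutex, with only a compiler barrier between the store and the notify. The names are taken from the snippet above; notifier_without_mutex and the use of std::atomic_signal_fence as the "compiler barrier" are just illustrative, not a recommendation.

#include <atomic>
#include <condition_variable>
#include <mutex>

std::mutex mutex_;
std::condition_variable condvar;
bool tM_continue = false;

// Hypothetical variant under discussion: the flag is modified WITHOUT
// holding the mutex, with a compiler-only barrier keeping the store
// ahead of the notification (no CPU fence is emitted).
void notifier_without_mutex() {
    tM_continue = true;                                  // store to the flag
    std::atomic_signal_fence(std::memory_order_seq_cst); // compiler barrier only
    condvar.notify_all();                                // notification
}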
This is my understanding (I'm not an expert on locks, so it may not be completely accurate):
With modification without the mutex, the waiting thread may not have reached the wait on the condition variable yet (it may be just about to enter it) when the notifying thread issues the notify call. Condition variables do not remember events, so the notification may be lost.
With modification under the mutex, the waiting thread either checks the flag before the modification, in which case it still holds the mutex, and wait() releases that mutex atomically as the thread blocks, so by the time the other thread can set the flag and send the notification the receiver is already waiting; or it checks the flag after the modification, sees it set, and does not need the notification at all.
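A minimal sketch of both sides may make this clearer. It reuses the names from the snippet above; the waiter side is my assumption of what the other thread looks like, since it is not shown in the original code.

#include <condition_variable>
#include <mutex>

std::mutex mutex_;
std::condition_variable condvar;
bool tM_continue = false; // the shared flag from the snippet above

// Waiter side (assumed; not shown in the original snippet).
void waiter() {
    std::unique_lock<std::mutex> lock(mutex_);
    // wait() re-checks the predicate and releases the mutex atomically
    // while blocking, so there is no window between "flag still false"
    // and "thread is waiting" for a notification to slip through.
    condvar.wait(lock, [] { return tM_continue; });
    // ... the flag is true here ...
}

// Notifier side, following the quoted snippet.
void notifier() {
    {
        std::unique_lock<std::mutex> xlock(mutex_);
        tM_continue = true; // modified under the mutex
    }
    condvar.notify_all(); // the lock does not need to be held for the notify
}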