C++ multithreading deadlock example

std::mutex m1, m2;
void f_1() {
  std::unique_lock l1{m1}, l2{m2};
}
void f_2() {
  std::unique_lock l1{m1}, l2{m2};
}
Suppose f_1 starts first: l1 locks m1 and l2 locks m2. Then l1 in f_2 tries to lock m1, and l2 tries to lock m2, but both are already locked. So a deadlock occurs, and the locks l1 and l2 in f_2 have to wait until m1 and m2 are released at the f_1 function exit, even though the order of locking is the same.
The same scenario also happens for f_1 if f_2 starts first. Are my assumptions correct?
If m1 is already locked then l1 will have to wait. l2 will not try to lock m2 until after l1 has acquired its lock.

This just follows normal initialization order. First l1 is initialized. Then l2 is initialized. They are not initialized at the same time.
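As a side note, if you ever do want both mutexes acquired "at once" with std::unique_lock, you can defer locking in the constructors and then hand both locks to std::lock, which acquires them with a deadlock-avoidance algorithm. A minimal sketch (the function name f_1 mirrors the example above; the bool return is just for illustration):

```cpp
#include <mutex>

std::mutex m1, m2;

bool f_1() {
    // std::defer_lock: construct the locks WITHOUT locking the mutexes yet.
    std::unique_lock l1{m1, std::defer_lock};
    std::unique_lock l2{m2, std::defer_lock};

    // std::lock acquires both mutexes in a deadlock-avoiding manner.
    std::lock(l1, l2);

    return l1.owns_lock() && l2.owns_lock();
}   // l2 and l1 unlock automatically in their destructors
```

With this pattern the textual order of l1 and l2 no longer matters for deadlock safety.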
So I change my mind: no deadlock occurs. Whichever function starts first, the locks inside it lock the mutexes, and the other function's locks have to wait. When the first function exits, its locks are unlocked automatically. Then the second function's locks can lock the mutexes freely.
Correct?
Yes, correct.
Whichever function starts first, the locks inside it lock the mutexes, and the other function's locks have to wait. When the first function exits, its locks are unlocked automatically. Then the second function's locks can lock the mutexes freely.
Correct?
std::unique_lock implicitly locks the mutex in its constructor – blocking the thread until the mutex becomes available, if already locked.

This does not necessarily happen when the "function starts", but in your example that is the case.

Furthermore, std::unique_lock implicitly unlocks the mutex in its destructor, i.e. when the std::unique_lock instance goes out of scope.

Again, this does not necessarily happen when the "function exits", but in your example that is the case.


Consider this code:
#include <iostream>
#include <mutex>

std::mutex m1;
void f_1()
{
  std::cout << "This is printed *before* the mutex will be locked" << std::endl;
  std::unique_lock l1{m1}; // <-- possibly blocks !!!
  std::cout << "This is printed *while* the mutex is locked (owned) by this thread" << std::endl;
  l1.unlock();
  std::cout << "This is printed *after* the mutex has been unlocked" << std::endl;
}



Also, if you want to lock multiple mutexes "at once", have a look at std::scoped_lock, because it provides "deadlock" avoidance:
https://en.cppreference.com/w/cpp/thread/scoped_lock/scoped_lock
I knew what you said but thanks for the explanation.
Isn't std::scoped_lock generally very inefficient compared to std::unique_lock?
std::recursive_mutex is not a replacement for std::scoped_lock or std::unique_lock.

std::recursive_mutex is something you use instead of std::mutex if you want to allow a thread to lock the mutex while it has already been locked by the same thread without causing a deadlock.
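A minimal sketch of that point (the names outer, inner, and depth are just for illustration): with std::recursive_mutex, the same thread may lock the mutex again while it already holds it; with a plain std::mutex the call to inner() below would deadlock (formally, it is undefined behavior).

```cpp
#include <mutex>

std::recursive_mutex rm;
int depth = 0;

void inner() {
    std::lock_guard lg{rm};  // same thread locks AGAIN -- fine for recursive_mutex
    ++depth;
}

void outer() {
    std::lock_guard lg{rm};  // first lock by this thread
    ++depth;
    inner();                 // would deadlock (UB) with a plain std::mutex
}
```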
yeah, yeah, I just messed things up temporarily; that's why I edited the post quickly. But thanks to you too.
My question is what's written in my post above.
Isn't std::scoped_lock generally very inefficient compared to std::unique_lock?

I think std::scoped_lock is exactly the same, if you lock a single mutex.

If you need to lock multiple mutexes "at once", there is probably some additional overhead in std::scoped_lock, but possible deadlocks are avoided. Locking multiple mutexes with individual std::unique_locks requires you to very carefully arrange things at every place where those mutexes are locked, in order to avoid deadlocks... For example, change the order of l1 and l2 inside f_2() in your example above and you certainly have a problem. In such a simple example it may be easy to see, but in a complex real-world application...
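To illustrate: here is a sketch of your two functions rewritten with std::scoped_lock, deliberately naming the mutexes in opposite orders. Because std::scoped_lock uses a deadlock-avoidance algorithm internally, this is still safe (the counter variable is just for illustration):

```cpp
#include <mutex>
#include <thread>

std::mutex m1, m2;
int counter = 0;

void f_1() {
    std::scoped_lock lk{m1, m2};  // locks both mutexes, deadlock-free
    ++counter;
}

void f_2() {
    std::scoped_lock lk{m2, m1};  // OPPOSITE order -- still safe with scoped_lock
    ++counter;
}
```

With individual std::unique_locks, this opposite ordering would be exactly the classic deadlock scenario.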
got it, thanks.
Topic archived. No new replies allowed.