In what circumstances would you use std::condition_variable instead of std::atomic? |
the choice isn't so much between a c_v and an atomic but rather, as mentioned in the previous post, between a 'normal' bool and std::atomic<bool>. A c_v is a variable by which a thread can wake up one (or multiple) waiting threads, whereas atomicity …
means that read or write access to a variable (in your case std::cout) or to a sequence of statements happens exclusively and without any interruption, so that one thread can't read intermediate states caused by another thread. … In general, reading and writing even for fundamental data types is not atomic. Thus you might read a half-written Boolean, which according to the standard results in undefined behavior. |
– 'The C++ Standard Library (2nd edition)' by N. Josuttis, sections 18.4.4 and 18.7
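For contrast, here is a minimal sketch (not from the post) of getting by with std::atomic<bool> alone: without a condition_variable the waiting thread has to poll the flag, e.g. by busy-waiting with a short sleep, but no mutex is needed and there is no torn read:
|
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> readyFlag{false};

void consumer()
{
    //no condition_variable: poll the atomic flag until the producer sets it
    while (!readyFlag.load())
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    std::cout << "flag was set, consumer continues" << std::endl;
}

int main()
{
    std::thread t(consumer);
    //pretend to prepare something, then signal via an atomic store:
    //no half-written bool, no data race, but the consumer burns cycles polling
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    readyFlag.store(true);
    t.join();
}
|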
So we could still combine a c_v with std::atomic<bool> in the second case:
|
#include <iostream>
#include <thread>
#include <atomic>
#include <condition_variable>
#include <mutex>

std::atomic<bool> readyFlag{false};
std::mutex mutexReady;               //the mutex is still needed for waiting on the condition_variable
std::condition_variable cond_varReady;

void function_1()
{
    //wait until the main thread is ready (readyFlag.load() == true)
    {
        std::unique_lock<std::mutex> l(mutexReady);
        cond_varReady.wait(l, []{ return readyFlag.load(); });
        //load() reads the atomic explicitly (copy ctors are deleted for std::atomic<>)
    }   //release lock
    //now do your stuff
    for (int i = 10; i > 0; --i)
    {
        std::cout << "From t1: " << i << std::endl;
    }
}

int main()
{
    std::thread t1(function_1);
    {
        //do the printing from the main thread first
        for (int i = 1; i <= 10; ++i)
        {
            std::cout << "From main: " << i << std::endl;
        }
        //signal that the main thread has prepared a condition
        {
            std::lock_guard<std::mutex> lg(mutexReady);
            readyFlag.store(true);   //store while holding the mutex to avoid a lost wakeup
        }   //release lock
        cond_varReady.notify_one();
    }
    t1.join();
}
|
when specifying std::launch::deferred, for how long is the call deferred? |
it is deferred for as long as get() (or wait()) is not called, and the program can even terminate safely without either being called. This is unlike std::thread objects, where either join() or detach() has to be called on the thread object before its lifetime ends or a move assignment to it happens, otherwise the program aborts calling std::terminate(). This allows lazy evaluation with std::future<> objects:
|
auto f1 = std::async(std::launch::deferred, someFunc_1);
auto f2 = std::async(std::launch::deferred, someFunc_2);
//...
auto val = thisOrThatIsTheCase() ? f1.get() : f2.get();
|
so the return value of either someFunc_1() or someFunc_2() is assigned to val, depending on the outcome of thisOrThatIsTheCase(); with std::launch::deferred only the function whose get() is called ever runs
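To see for how long the call stays deferred, a small sketch (not from the post; answer() is a made-up task): a zero-timeout wait_for() keeps reporting std::future_status::deferred until get() or wait() forces the function to run:
|
#include <chrono>
#include <future>
#include <iostream>

int answer() { return 42; }   //hypothetical task

int main()
{
    auto f = std::async(std::launch::deferred, answer);

    //still deferred: answer() has not run yet
    if (f.wait_for(std::chrono::seconds(0)) == std::future_status::deferred)
    {
        std::cout << "answer() is still deferred" << std::endl;
    }

    std::cout << f.get() << std::endl;   //get() finally runs answer() synchronously
}
|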
t1.get() simply returns the result, so it has nothing to do with waking up the thread from its deferred state. Is that assumption correct? |
the call of get() results in one of 3 things:
(a) if function_1() was started with async() in a separate thread and has already finished, we immediately get its result
(b) if function_1() was started but has not finished yet, get() blocks and waits for it to end and yields the result
(c) if function_1() has not started yet (as in our example, due to std::launch::deferred), it will be forced to start now and, like a synchronous function call, get() will block until it yields the result
in all cases get() can pass on either the return value of the function or any exception thrown by the function
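As a sketch of case (c) and of the exception passing (mayThrow() is a made-up example, not from the post): a deferred function only starts inside get(), and an exception it throws re-emerges from that get() call:
|
#include <future>
#include <iostream>
#include <stdexcept>

int mayThrow(bool fail)   //hypothetical task
{
    if (fail)
    {
        throw std::runtime_error("something went wrong");
    }
    return 7;
}

int main()
{
    auto ok  = std::async(std::launch::deferred, mayThrow, false);
    auto bad = std::async(std::launch::deferred, mayThrow, true);

    std::cout << ok.get() << std::endl;   //case (c): mayThrow(false) runs now, inside get()

    try
    {
        bad.get();                        //mayThrow(true) runs now and throws
    }
    catch (const std::exception& e)
    {
        std::cout << "caught: " << e.what() << std::endl;   //the exception is passed on by get()
    }
}
|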
… unsure what exactly std::async does? |
async() provides an interface to let a piece of functionality, a callable object, run in the background as a separate thread, if possible |
– as above, section 18.1
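To illustrate the 'if possible': with the default launch policy (no std::launch argument) the implementation may run the callable in a separate thread right away or defer it until get()/wait(). A minimal sketch, with lengthyComputation() being a made-up stand-in:
|
#include <future>
#include <iostream>

int lengthyComputation()   //hypothetical background work
{
    int sum = 0;
    for (int i = 1; i <= 1000; ++i)
    {
        sum += i;
    }
    return sum;
}

int main()
{
    //default policy == std::launch::async | std::launch::deferred:
    //the library decides whether to start a background thread or to defer
    std::future<int> result = std::async(lengthyComputation);

    std::cout << "doing something else in main" << std::endl;

    std::cout << "result: " << result.get() << std::endl;   //waits for or starts the task
}
|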
Apart from the Josuttis book mentioned above, another good reference for C++ concurrency is 'C++ Concurrency in Action – Practical Multithreading' by Anthony Williams