> I think there's a slight confusion here: The two functions are never supposed to run simultaneously.
That is a little confusing. If the two functions are never run simultaneously, then what's the benefit of putting them in separate threads?
> Speaking of which, if a thread is associated to a function, is the program's main thread associated with the function main() (or WinMain, etc.)?
I guess you could say that. main() is the entry point for the main thread, just as <yourfunction> is the entry point for whatever additional thread you spawn. But there isn't really any association between them other than the function being the entry point.
> I was also quite curious about what exactly makes the difference between main() and WinMain() or DllMain() at compile time
main() is the standard entry point as dictated by the C++ standard.
WinMain is WinAPI's custom entry point. It exists so that additional, platform-specific information can be passed to the program's entry point (particularly the HINSTANCE of the program).
I basically never use DllMain, but it's the same idea as WinMain, just for DLLs.
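For reference, the three entry-point signatures look roughly like this (WinMain and DllMain come from <windows.h>; the parameter names are just the conventional ones):

```cpp
// Standard entry point, as dictated by the C++ standard:
int main(int argc, char* argv[]);

// WinAPI entry point for GUI programs -- the extra parameters carry
// platform-specific info, most notably the HINSTANCE of the program:
int WINAPI WinMain(HINSTANCE hInstance,
                   HINSTANCE hPrevInstance, // always NULL on modern Windows
                   LPSTR     lpCmdLine,
                   int       nCmdShow);

// DLL entry point, called on load/unload and thread attach/detach:
BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved);
```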
> Looks like I was under the wrong impression that declaring a variable as volatile would spare me from most of the thread problems.
In C++, 'volatile' is rather ill-defined... sort of. It's actually very clearly defined by the standard, but some compilers (notably MSVS) like to give their own special meaning to the word.
Strictly speaking, all 'volatile' does is guarantee that every access to the variable actually does a read/write to memory. This removes optimization possibilities. For example, normally the compiler might choose to keep a variable in a register so that accesses are faster -- but with the volatile keyword it isn't allowed to do that.
It's really not that useful on its own.
However MSVS takes the volatile keyword a step further and puts a memory barrier around accesses. I think it might also ensure that accesses to the variable are atomic (but don't quote me on that!).
Both of these make the individual variable thread-safe on its own... but they don't necessarily make the whole program thread-safe. For example, in your code, even if all accesses to all variables are atomic, and the memory accesses are behind barriers and occur in the order you expect, you could still get screwed:
```cpp
CallList[CallListSize] = NULL;
// if 'CallListSize' is modified here -- you're boned
CallListParameters[CallListSize] = NULL;
CallListSize--;
```
But really... you shouldn't rely on 'volatile' doing this. Like I said, it only works in MSVS, so if you try to build on another compiler it'll be disastrous. What's worse, the behavior isn't even consistent across different versions of MSVS: some versions do it, others don't. So really I would avoid it altogether.
> I'll look into mutex-es as soon as possible.
They're conceptually very simple. A mutex can only be locked by one thread at a time. If you try to lock and another thread has it locked already, the thread will stop (sleep) and wait for it to be unlocked. This ensures that two threads are not trying to access sensitive data at the same time.
Furthermore, they form a memory barrier, so when you unlock a mutex, you are guaranteed that all previous writes that the thread has performed are "done". (memory barriers and pipelining are tricky to explain -- reply if you're interested and I'd be happy to give a crash course).
So for an example of a mutex, let's look at some broken code:
```cpp
// thread A
foo++;

// thread B
ar1[foo] = x;
// <- caution!
ar2[foo] = y;
```
If thread A runs its foo++ line while thread B is on the 'caution' line, you're boned because ar1 and ar2 will fall out of sync. To make sure this never happens, we can put those accesses behind a mutex:
```cpp
// thread A
mymutex.lock();
foo++;
mymutex.unlock();

// thread B
mymutex.lock();
ar1[foo] = x;
ar2[foo] = y;
mymutex.unlock();
```
Now we are guaranteed that only one of those blocks of code will be run at a time. So it is impossible for the foo++ line in thread A to interrupt the array updating in thread B.
In my code, I used the RAII constructs 'unique_lock' and 'lock_guard', which basically automate the process of locking and unlocking.
For example my code here:
```cpp
{
    thd::lock_guard<thd::mutex> lock(queueMutex);
    wantExit = true;
    queuePending.notify_one();
}
```
Is the same as this:
```cpp
{
    queueMutex.lock();
    wantExit = true;
    queuePending.notify_one();
    queueMutex.unlock();
}
```
|
The lock_guard object will automatically lock the mutex in its constructor and unlock it in its destructor. Use of these is advised because if something throws an exception, execution would normally skip over the unlock(), keeping the mutex locked (which might be trouble) -- but with RAII the destructor still kicks in even if there's an exception, so the mutex will always be unlocked.
Anyway blah blah blah. Hopefully I'm clarifying things and not confusing you. I'm happy to answer more questions. This stuff is actually a lot of fun for me. :)