1. What is the disadvantage of always assigning a thread to a different CPU core each time it is allowed to execute? |
Yes, I think "moving" a thread between different CPU cores can have a negative effect on the CPU caches. Caches always need a "warm-up" phase until they become effective. By moving the thread around, you have to go through that "warm-up" phase again each time, at least as far as the per-core L1/L2 caches are concerned; the L3 cache is usually "shared" between cores.
But you really should let the OS decide which core a thread runs on. That is especially true with the latest processors that have separate "efficiency" and "performance" cores plus a thread director. And even older processors with only one type of core have so-called "preferred" cores that the OS tries to "fill" first.
2. What are the main differences between mutexes and semaphores, and specifically, what is a key difference between a mutex and a binary semaphore? |
Two key differences, IMO:
1. A semaphore has a counter and only "blocks" when a thread tries to acquire (decrement) it while its counter is already at zero (otherwise the counter is decremented without blocking), whereas a mutex knows only two states, locked and free, and "blocks" when you try to acquire it in the locked state.
2. The thread that successfully locked (acquired) a mutex is considered to be "owning" that mutex. Usually, only the thread currently "owning" a mutex is allowed to unlock that mutex again! Conversely, the increment and decrement operations on a semaphore can be (and often are) performed by totally different threads.
💡 Because of (2), a mutex is not exactly the same as a semaphore with its counter limited to a maximum of 1.
As an aside: In Win32 programming, there are "mutex" and "semaphore" objects that can be shared between different processes for inter-process synchronization. At the same time, a "critical section" (a light-weight, more efficient "mutex") is local to a process, i.e. it cannot be shared across process boundaries.