I have 2 threads: the producer thread receives some data over a network socket, constructs an object and adds it to a queue; the consumer thread pops items from that same queue and does some processing.
Since both threads modify the queue (by enqueuing and dequeuing respectively), I think using a mutex in both threads should be sufficient. Is this a good approach, or should I use a semaphore? If so, why?
Either can be made to work. It depends on details of the actual design. How many producer/consumer requests will you allow simultaneously? Is it truly limited to two threads, or will the code spawn others as more requests occur?
I found this short blog post interesting. It discusses and compares the mutex and the semaphore and may give you some ideas.
"A mutex object allows multiple process threads to access a single shared resource but only one at a time. On the other hand, semaphore
allows multiple process threads to access the finite instance of the
resource until available. In mutex, the lock can be acquired and
released by the same process at a time."
Examples in C
One queue accessed by multiple threads using mutex
Blocking Queue in C++ using semaphores (Not C specific, but the concept is the same.)
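For the producer/consumer question above, here is a minimal sketch of a queue guarded by one mutex plus a condition variable, assuming POSIX threads; the node and queue types are made up for illustration and error handling is omitted:

#include <pthread.h>
#include <stdlib.h>

/* Illustrative types -- not from the original question. */
struct node  { void *data; struct node *next; };
struct queue {
    struct node    *head, *tail;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
};

/* Producer: enqueue under the mutex, then wake a waiting consumer. */
void queue_push(struct queue *q, void *data)
{
    struct node *n = malloc(sizeof *n);
    n->data = data;
    n->next = NULL;

    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* Consumer: block on the condition variable while the queue is empty. */
void *queue_pop(struct queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->head == NULL)
        pthread_cond_wait(&q->not_empty, &q->lock);

    struct node *n = q->head;
    q->head = n->next;
    if (q->head == NULL) q->tail = NULL;
    pthread_mutex_unlock(&q->lock);

    void *data = n->data;
    free(n);
    return data;
}

The mutex alone makes the queue operations safe; the condition variable only exists so the consumer can sleep instead of spinning when the queue is empty.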
Related
What is the difference between semaphores and mutex provided by pthread library ?
Semaphores have a synchronized counter and mutexes are just binary (true / false).
A semaphore is often used as a definitive mechanism for answering how many elements of a resource are in use -- e.g., an object that represents n worker threads might use a semaphore to count how many worker threads are available.
The truth is you can represent a semaphore by an int that is synchronized by a mutex.
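As a rough sketch of that idea (assuming POSIX threads; the type and function names are invented here), a counting semaphore can be built from an int guarded by a mutex, with a condition variable so waiters can block instead of spinning:

#include <pthread.h>

/* Illustrative semaphore built from an int + mutex + condition variable. */
struct my_sem {
    int             count;
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
};

void my_sem_wait(struct my_sem *s)   /* P / down */
{
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void my_sem_post(struct my_sem *s)   /* V / up */
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}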
I am going to talk about mutex vs. binary semaphore. You obviously use a mutex to prevent data in one thread from being accessed by another thread at the same time.
(Assume that you have just called lock() and are in the middle of accessing some data. This means that you don't expect any other thread (or another instance of the same thread code) to access the same data locked by the same mutex. That is, if the same thread code is executed on a different thread instance and hits the lock, then lock() should block the control flow there.)
The same applies to a thread that runs different thread code, which also accesses the same data and is also locked by the same mutex.
In this case, you are still in the process of accessing the data and may take, say, another 15 seconds to reach the mutex unlock (so that the other thread blocked at the mutex lock would unblock and be allowed to access the data).
Would you ever allow yet another thread to just unlock that same mutex, and in turn allow the thread that is already waiting (blocking) on the mutex lock to unblock and access the data? (I hope you get what I am saying here.)
As per the agreed-upon universal definition:
- with "mutex" this can't happen. No other thread can unlock the lock in your thread.
- with "binary-semaphore" this can happen. Any other thread can unlock the lock in your thread.
So, if you are very particular about using a binary semaphore instead of a mutex, then you should be very careful about "scoping" the locks and unlocks: every control flow that hits a lock should also hit the matching unlock, and there should never be a "first unlock"; it should always be "lock first".
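To make the ownership difference concrete, here is a small sketch assuming POSIX semaphores and pthreads: posting a semaphore from a thread that never waited on it is perfectly legal, whereas unlocking a mutex from a thread that does not hold it is undefined behaviour (or an EPERM error with an error-checking mutex).

#include <pthread.h>
#include <semaphore.h>

sem_t           sem;   /* assume sem_init(&sem, 0, 0) was called elsewhere */
pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

void *other_thread(void *arg)
{
    /* Legal: any thread may post ("unlock") the semaphore, even one
       that never waited on it. */
    sem_post(&sem);

    /* Not legal: this thread does not own the mutex, so unlocking it
       here would be undefined behaviour. */
    /* pthread_mutex_unlock(&mtx); */

    return NULL;
}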
The Toilet Example
Mutex:
Is a key to a toilet. One person can have the key - occupy the toilet - at a time. When finished, the person gives (frees) the key to the next person in the queue.
"Mutexes are typically used to serialise access to a section of re-entrant code that cannot be executed concurrently by more than one thread. A mutex object only allows one thread into a controlled section, forcing other threads which attempt to gain access to that section to wait until the first thread has exited from that section."
(A mutex is really a semaphore with value 1.)
Semaphore:
Is the number of free identical toilet keys.
For example, say we have four toilets with identical locks and keys. The semaphore count - the count of keys - is set to 4 at the beginning (all four toilets are free), and the count is decremented as people come in. If all toilets are full, i.e. there are no free keys left, the semaphore count is 0. Now, when e.g. one person leaves the toilet, the semaphore is increased to 1 (one free key), and the key is given to the next person in the queue.
"A semaphore restricts the number of simultaneous users of a shared resource up to a maximum number. Threads can request access to the resource (decrementing the semaphore), and can signal that they have finished using the resource (incrementing the semaphore)."
Source
A mutex is used to avoid race conditions between multiple threads,
whereas a semaphore is used as a synchronizing element across multiple processes.
A mutex can't be replaced with a binary semaphore, since with a semaphore one process can wait while another process releases it. With a mutex, both acquisition and release are handled by the same thread.
The difference between the semaphore and the mutex is the difference between mechanism and pattern. The difference is in their purpose (intent) and in how they work (behavior).
The mutex, barrier, and pipeline are parallel programming patterns. A mutex is used (intended) to protect a critical section and ensure mutual exclusion. A barrier makes the agents (threads/processes) keep waiting for each other.
One of the features (behavior) of the mutex pattern is that only the allowed agent(s) (process or thread) can enter a critical section, and only that agent(s) can voluntarily get out of it.
There are cases where a mutex allows a single agent at a time. There are cases where it allows multiple agents (multiple readers) and disallows some other agents (writers).
The semaphore is a mechanism that can be used (is intended) to implement different patterns. It is (behavior) generally a flag (possibly protected by mutual exclusion). (One interesting fact: even the mutex pattern can be used to implement a semaphore.)
In common usage, semaphores are mechanisms provided by kernels, and mutexes are provided by user-space libraries.
Note that there are misconceptions about semaphores and mutexes: that semaphores are used for synchronization, and that mutexes have ownership. This comes from popular OS books. But the truth is that mutexes, semaphores, and barriers are all used for synchronization. The intent of a mutex is not ownership but mutual exclusion. This misconception gave rise to the popular interview question asking for the difference between mutexes and binary semaphores.
Summary:
Intent
- mutex: mutual exclusion
- semaphore: implement parallel design patterns
Behavior
- mutex: only the allowed agent(s) enter the critical section, and only they can exit
- semaphore: enter if the flag says go, otherwise wait until someone changes the flag
From a design perspective, a mutex is more like the state pattern, where the algorithm selected by the state can change the state. The binary semaphore is more like the strategy pattern, where an external algorithm can change the state and, eventually, the algorithm/strategy selected to run.
These two articles explain in great detail the differences between mutexes and semaphores.
This Stack Overflow answer gives a similar explanation.
A semaphore is used more as a flag, for which you really don't need to bring in an RTOS/OS. A semaphore can be accidentally or deliberately changed by other threads (say, due to bad coding).
When your thread uses a mutex, it owns the resource. No other thread can access it until the resource is freed.
Mutexes can be applied only to threads in a single process and do not work between processes as semaphores do.
A mutex is like a semaphore with S=1.
With a semaphore you can control the number of concurrent accesses, but with a mutex only one process at a time can access it.
See the implementation of these two below (all functions are atomic):
Semaphore:

wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}

signal(S) {
    S++;
}

Mutex:

acquire() {
    while (!available)
        ;   // busy wait
    available = false;
}

release() {
    available = true;
}
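The pseudocode above only works if each function body executes atomically, which real hardware does not give you for free. A minimal runnable sketch of the mutex half, assuming a C11 compiler with <stdatomic.h>, gets the same effect with an atomic test-and-set:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear means "available" */

void acquire(void)
{
    /* Keep spinning until the flag was previously clear, i.e. we set it. */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;   /* busy wait */
}

void release(void)
{
    atomic_flag_clear_explicit(&lock, memory_order_release);
}

A production mutex would also put waiting threads to sleep instead of spinning (for example via futexes on Linux).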
Suppose I have multiple threads blocking on a call to pthread_mutex_lock(). When the mutex becomes available, does the first thread that called pthread_mutex_lock() get the lock? That is, are calls to pthread_mutex_lock() in FIFO order? If not, what, if any, order are they in? Thanks!
When the mutex becomes available, does the first thread that called pthread_mutex_lock() get the lock?
No. One of the waiting threads gets a lock, but which one gets it is not determined.
FIFO order?
FIFO mutex is rather a pattern already. See Implementing a FIFO mutex in pthreads
"If there are threads blocked on the mutex object referenced by mutex when pthread_mutex_unlock() is called, resulting in the mutex becoming available, the scheduling policy shall determine which thread shall acquire the mutex."
Aside from that, the answer to your question isn't specified by the POSIX standard. It may be random, or it may be in FIFO or LIFO or any other order, according to the choices made by the implementation.
FIFO ordering is about the least efficient mutex wake order possible. Only a truly awful implementation would use it. The thread that ran most recently may be able to run again without a context switch, and the more recently a thread ran, the more of its data and code will still be hot in the cache. Reasonable implementations try to give the mutex to the thread that held it most recently, most of the time.
Consider two threads that do this:
1. Acquire a mutex.
2. Adjust some data.
3. Release the mutex.
4. Go to step 1.
Now imagine two threads running this code on a single core CPU. It should be clear that FIFO mutex behavior would result in one "adjust some data" per context switch -- the worst possible outcome.
Of course, reasonable implementations generally do give some nod to fairness. We don't want one thread to make no forward progress. But that hardly justifies a FIFO implementation!
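A minimal sketch of the loop each thread runs, assuming pthreads (the shared data is just a counter here). With strict FIFO handoff on a single core, every unlock would pass the mutex to the other thread, so each increment would cost a context switch:

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long shared_counter = 0;

void *adjust_loop(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);     /* 1. acquire the mutex */
        shared_counter++;              /* 2. adjust some data  */
        pthread_mutex_unlock(&lock);   /* 3. release the mutex */
    }                                  /* 4. go to step 1      */
    return NULL;
}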
While reading about binary semaphore and mutex I found the following difference:
Both can have value 0 and 1, but a mutex can be unlocked only by the same thread that acquired the mutex lock. A thread that acquires a mutex lock can be subject to priority inversion when a higher-priority process wants to acquire the same mutex, whereas this is not the case with a binary semaphore.
So where should I use binary semaphores? Can anyone cite an example?
EDIT: I think I have figured out how both work. Basically, a binary semaphore offers synchronization, whereas a mutex offers a locking mechanism. I read some examples from the Galvin OS book to make it clearer.
One typical situation where I find binary semaphores very useful is for thread initialization where the thread will read from a structure owned by the parent thread. The parent thread needs to wait for the new thread to read the shared data from the structure before it can let the structure's lifetime end (by leaving its scope, for instance). With a binary semaphore, all you have to do is initialize the semaphore value to zero and have the child post it while the parent waits on it. Without semaphores, you'd need a mutex and condition variable and much uglier program logic for using them.
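A sketch of that initialization handoff using a POSIX semaphore (the structure and field names are illustrative): the parent only lets the stack-allocated structure go out of scope after the child has signalled that it has read what it needs.

#include <pthread.h>
#include <semaphore.h>

struct start_args {
    sem_t done_reading;   /* child posts this once it has copied its arguments */
    int   config_value;   /* whatever the new thread needs at startup          */
};

static void *child(void *p)
{
    struct start_args *args = p;
    int my_config = args->config_value;   /* read the shared structure         */
    sem_post(&args->done_reading);        /* tell the parent we are done;
                                             args must not be touched after this */
    /* ... do the real work with my_config ... */
    (void)my_config;
    return NULL;
}

void spawn_child(void)
{
    struct start_args args = { .config_value = 42 };   /* lives on this stack */
    sem_init(&args.done_reading, 0, 0);

    pthread_t tid;
    pthread_create(&tid, NULL, child, &args);

    sem_wait(&args.done_reading);   /* now it is safe to let 'args' die */
    sem_destroy(&args.done_reading);
    pthread_detach(tid);
}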
In almost all cases I use a binary semaphore to signal another thread without locking.
A simple example of usage for a synchronous request:
Thread 1:
Semaphore sem;
request_to_thread2(&sem); // Function sending request to thread2 in any fashion
sem.wait(); // Waiting request complete
Thread 2:
Semaphore *sem;
process_request(sem); // Process request from thread 1
sem->post(); // Signal thread 1 that request is completed
Note: before posting the semaphore in thread 2, you can safely set thread 1's data without any additional synchronization.
The canonical example for using a counted semaphore instead of a binary mutex is when you have a limited number of resources available that are a) interchangeable and b) more than one.
For instance, if you want to allow a maximum of 10 readers to access a database at once, you can use a counted semaphore initialized to 10 to limit access to the resource. Each reader must acquire the semaphore before accessing the resource, decrementing the available count. Once the count reaches 0 (i.e. 10 readers have gained access to, and are still using, the database), all other readers are locked out. Once a reader finishes, it bumps the semaphore count back up by one to indicate that it is no longer using the resource, and some other reader may now obtain the semaphore lock and gain access in its stead.
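That reader-limiting pattern maps directly onto a POSIX counting semaphore; a minimal sketch where read_from_database() stands in for the actual (hypothetical) database access:

#include <semaphore.h>

void read_from_database(void);   /* hypothetical database access */

sem_t db_slots;   /* initialise once at startup: sem_init(&db_slots, 0, 10) */

void reader(void)
{
    sem_wait(&db_slots);   /* take one of the 10 slots; blocks while the count is 0 */
    read_from_database();
    sem_post(&db_slots);   /* give the slot back */
}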
However, the counted semaphore, just like all other synchronization primitives, has many use cases, and it's just a matter of thinking outside the box. You may find that many problems you are used to solving with a mutex plus additional logic can be implemented more easily and more straightforwardly with a semaphore. A mutex is a subset of the semaphore, that is to say, anything you can do with a mutex can be done with a semaphore (simply set the count to one), but there are things that can be done with a semaphore alone that cannot be done with just a mutex.
At the end of the day, any one synchronization primitive is generally enough to do anything (think of it as being "Turing-complete" for thread synchronization, to bastardize that term). However, each is tailor-made for a different application, and while you may be able to force one to do your bidding with some customization and additional glue, it is possible that a different synchronization primitive is a better fit for the job.
I have 3 questions about thread and process communication.
1. Can the Linux functions msgget(), msgsnd(), and msgrcv() be invoked by multiple threads in one process? These calls in different threads would access (read/write) the process's message queue. Are all the race conditions taken care of by the system? If not, is there a good method for threads to send messages to their main thread (process)?
2. Can the semop() function be used to synchronize threads in one process?
3. There is a shared memory segment that is accessed by the following entities:
- a process
- several threads in one process
Do I have to use an inter-process semaphore and a thread-level semaphore at the same time? Is there any simple way to handle this?
A lot of questions. :) Thanks.
Can the Linux function msgget(), msgsnd(), and msgrcv() be invoked by multiple threads in one process?
You do not need to worry about race conditions; the system will take care of that. There is no race condition with these calls.
can semop() function be used to synchronize threads in one process?
Yes, read more in the documentation
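A System V semaphore lives in the kernel and is addressed by an ID rather than a memory address, so any thread (or process) that knows the ID can operate on it. A minimal sketch of using semop() as a binary lock, assuming the SysV IPC headers (error handling omitted):

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* On Linux the caller must define this union for semctl(SETVAL). */
union semun { int val; struct semid_ds *buf; unsigned short *array; };

int make_binary_sem(void)
{
    int id = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);   /* one semaphore    */
    union semun arg = { .val = 1 };
    semctl(id, 0, SETVAL, arg);                          /* start "unlocked" */
    return id;
}

void sysv_lock(int id)
{
    struct sembuf op = { .sem_num = 0, .sem_op = -1, .sem_flg = 0 };
    semop(id, &op, 1);   /* decrement; blocks while the value is 0 */
}

void sysv_unlock(int id)
{
    struct sembuf op = { .sem_num = 0, .sem_op = +1, .sem_flg = 0 };
    semop(id, &op, 1);   /* increment; wakes one waiter if any */
}

That said, for threads within a single process, pthread mutexes/condition variables or POSIX sem_t are usually lighter-weight than SysV semaphores.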
Do I have to use semaphore of inter-process level and a semaphore of threads level?
Any resource which is shared globally among threads or processes is subject to race conditions, due to one or more threads or processes trying to access it at the very same time, so you need to synchronize access to such a shared global resource.