How to implement the Reader Writer problem, where only one reader is allowed at a time, and only if no writer wants to modify the shared structure?
Reader:
    wait(mutex)
    wait(w)
    // Read
    signal(w)
    signal(mutex)
Writer:
    wait(w)
    wait(mutex)
    // Write
    signal(w)
    signal(mutex)
Does this solution make any sense?
Thread priorities are your friend here, plus, if you're being strict about it, the PREEMPT_RT kernel patch set too. Give the writer a higher priority than the readers.
I'm presuming you have two semaphores to a) guard access to the structure (mutex), and b) flag that the structure has been updated (w). In which case you don't need to wait for w in the writer, and you don't need to signal w in the reader. The reader should wait for w, and then wait for mutex, read, then post mutex. The writer should wait for mutex, write, and then signal mutex and w.
The priority of the writer thread plus the PREEMPT_RT kernel (which resolves priority inversions) means that the writer will be given mutex ASAP, no matter what the reader is doing (in fact the priority of the reader will be temporarily bumped upwards to ensure that it reaches the point of signalling mutex as quickly as possible).
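A minimal POSIX sketch of the scheme this answer describes (the `sem_t` names and the `demo()` harness are illustrative, not from the question): `mutex` guards the structure and starts at 1; `w` flags that an update happened and starts at 0, so the reader blocks until a writer has posted it.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

static sem_t mutex;        /* guards the shared structure, initialised to 1 */
static sem_t w;            /* "structure was updated" flag, initialised to 0 */
static int shared_value;

static void *writer_thread(void *arg) {
    (void)arg;
    sem_wait(&mutex);      /* take the structure lock */
    shared_value = 42;     /* write */
    sem_post(&mutex);      /* release the lock */
    sem_post(&w);          /* flag that an update happened */
    return NULL;
}

/* Reader: wait for an update, then take the lock, read, release. */
static int read_after_update(void) {
    int v;
    sem_wait(&w);          /* block until a writer has posted an update */
    sem_wait(&mutex);
    v = shared_value;      /* read */
    sem_post(&mutex);
    return v;
}

static int demo(void) {
    pthread_t t;
    sem_init(&mutex, 0, 1);
    sem_init(&w, 0, 0);
    pthread_create(&t, NULL, writer_thread, NULL);
    int v = read_after_update();
    pthread_join(t, NULL);
    return v;
}
```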
I'm a beginner in C and multithreaded programming.
My textbook describes the readers-writers problem that favors readers: it requires that no reader be kept waiting unless a writer has already been granted permission to use the object. In other words, no reader should wait simply because a writer is waiting. Below is the code,
where
void P(sem_t *s); /* Wrapper function for sem_wait */
void V(sem_t *s); /* Wrapper function for sem_post */
and
The w semaphore controls access to the critical sections that access the shared object. The mutex semaphore protects access to the shared readcnt variable, which counts the number of readers currently in the critical section.
I don't quite understand what the textbook means. It seems to me that if there is a reader, the writer won't be able to update the shared object. But my textbook says 'no reader should wait simply because a writer is waiting', and when a writer is writing it only locks w, which doesn't stop any readers except the first one?
Let's say there are four threads T1, T2, T3, T4.
T1 is a writer and T2, T3, T4 are readers.
Now assume T1 gets scheduled first: it locks the semaphore w and, once done, releases w.
After this, assume T2 gets scheduled: it locks the semaphore mutex, increments the global variable readcnt and, since it is the first reader, locks the semaphore w as well. It then releases mutex and enters the critical section.
If T3 gets scheduled now, it acquires mutex, increments readcnt, releases mutex and enters the critical section.
If T1 gets scheduled again, it cannot acquire w, since w is held by the reader thread T2; T1 cannot acquire w until the last reader thread exits. If the writer T1 is waiting and at the same time T4 gets scheduled, T4 will run, because it does not need to lock w.
That is why the textbook says no reader should wait simply because a writer is waiting.
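The scenario above can be sketched with POSIX semaphores; this is the favors-readers pattern the textbook describes (the loop counts and the `bad`/`readers_in` instrumentation are illustrative additions that just check the invariants):

```c
#include <pthread.h>
#include <semaphore.h>

static sem_t mutex, w;      /* both initialised to 1 */
static int readcnt = 0;     /* readers currently in the critical section */
static int shared = 0;      /* even except while a writer is mid-update */
static int readers_in = 0;  /* demo instrumentation */
static int bad = 0;         /* set if an invariant is violated */

static void reader(void) {
    sem_wait(&mutex);
    if (++readcnt == 1)
        sem_wait(&w);       /* first reader locks writers out */
    sem_post(&mutex);

    __sync_fetch_and_add(&readers_in, 1);
    if (shared % 2 != 0)    /* the writer keeps `shared` even outside w */
        bad = 1;
    __sync_fetch_and_sub(&readers_in, 1);

    sem_wait(&mutex);
    if (--readcnt == 0)
        sem_post(&w);       /* last reader lets writers back in */
    sem_post(&mutex);
}

static void writer(void) {
    sem_wait(&w);
    shared++;               /* odd only while we hold w... */
    if (readers_in != 0)    /* ...during which no reader is inside */
        bad = 1;
    shared++;
    sem_post(&w);
}

static void *reader_loop(void *a) { (void)a; for (int i = 0; i < 1000; i++) reader(); return NULL; }
static void *writer_loop(void *a) { (void)a; for (int i = 0; i < 1000; i++) writer(); return NULL; }

static int demo(void) {
    pthread_t r[3], wt;
    sem_init(&mutex, 0, 1);
    sem_init(&w, 0, 1);
    for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader_loop, NULL);
    pthread_create(&wt, NULL, writer_loop, NULL);
    for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
    pthread_join(wt, NULL);
    return bad == 0 && shared == 2000;
}
```

Note that while any reader holds w on the group's behalf, a newly arriving reader like T4 only touches mutex and readcnt, so it walks straight past a waiting writer.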
But as Leonard said in the other answer, if the writer is already writing, you can't interrupt that process, so T2, T3 and T4 have to wait for T1 to release w.
As the first comment says, if the writer is already writing, you can't interrupt that process. The case you need to deal with is: something is happening (reading or writing), a writer request arrives, and then a reader request arrives while the original lock is still held; you need to ensure that the reader request gets serviced first. This is a tiny bit complex, but you're trying to learn, so working out how to do this really is the assignment. Go for it! Try some code. If you are truly stuck, ask again.
I'm trying to figure out a way to put some threads into a passive waiting mode and wake them up as they arrive at the barrier. I have a fixed number of threads that should arrive.
I first thought about a semaphore that I would initialise to 0 so it would block, but the threads would be released in a random order. I would like a system that releases the threads in the order they came to the synchronisation barrier, like a FIFO.
I also thought about using two semaphores: one that blocks, releases a thread and sorts it; if the thread is the right one it just goes, otherwise it's blocked by the second semaphore. But this scheme seems long and tedious.
Does someone have an idea or suggestions that would help me ?
Thank you very much :)
On Linux, you can just use a condition variable and a mutex to block and unblock threads in the same FIFO order.
This is because all waiters on a condition variable are appended to the futex wait queue in the kernel in order, and they are woken in the same FIFO order, as long as you keep the mutex locked while signalling the condition variable.
However, as commenters mentioned, depending on thread execution order like this is fragile.
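If you need arrival-order release without relying on kernel queue order, a ticket scheme on top of one mutex and one condition variable guarantees it. This is a sketch; `fifo_barrier` and its fields are illustrative names, not a standard API:

```c
#include <pthread.h>

/* FIFO barrier: threads are released in the order they arrived,
   regardless of the order the scheduler happens to wake them. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    unsigned next_ticket;   /* arrival order is fixed by taking a ticket */
    unsigned now_serving;   /* only this ticket may leave */
    unsigned total;         /* number of threads expected */
    int open;               /* set once everyone has arrived */
} fifo_barrier;

static void fifo_barrier_init(fifo_barrier *b, unsigned n) {
    pthread_mutex_init(&b->lock, NULL);
    pthread_cond_init(&b->cond, NULL);
    b->next_ticket = b->now_serving = 0;
    b->total = n;
    b->open = 0;
}

/* Returns this thread's arrival ticket (0, 1, 2, ...). */
static unsigned fifo_barrier_wait(fifo_barrier *b) {
    pthread_mutex_lock(&b->lock);
    unsigned ticket = b->next_ticket++;
    if (b->next_ticket == b->total) {   /* last arrival opens the barrier */
        b->open = 1;
        pthread_cond_broadcast(&b->cond);
    }
    while (!b->open || ticket != b->now_serving)
        pthread_cond_wait(&b->cond, &b->lock);
    b->now_serving++;                   /* release the next ticket holder */
    pthread_cond_broadcast(&b->cond);
    pthread_mutex_unlock(&b->lock);
    return ticket;
}

static fifo_barrier bar;
static unsigned order[4];
static unsigned idx_;
static pthread_mutex_t order_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *a) {
    (void)a;
    unsigned t = fifo_barrier_wait(&bar);
    pthread_mutex_lock(&order_lock);
    order[idx_++] = t;
    pthread_mutex_unlock(&order_lock);
    return NULL;
}

static int demo(void) {
    pthread_t th[4];
    fifo_barrier_init(&bar, 4);
    for (int i = 0; i < 4; i++) pthread_create(&th[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(th[i], NULL);
    unsigned seen = 0;
    for (int i = 0; i < 4; i++) seen |= 1u << order[i];
    return idx_ == 4 && seen == 0xFu;   /* all four tickets released exactly once */
}
```

The demo checks that every ticket holder was released; the log itself can't assert strict order, because a released thread appends to it only after leaving the barrier.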
I am learning about POSIX threads and my professor has started teaching the first readers-writers problem. This is the pseudocode I have for solving it (only the first case: readers' preference).
semaphore rw_mutex = 1; /* semaphore shared by readers & writers */
semaphore mutex = 1; /* protects read_count */
int read_count = 0; /* number of readers currently in the CS */
Writer:
do {
    lock(rw_mutex);        /* ensure no writer or reader can enter */
    ...
    /* writing is performed */
    ...
    unlock(rw_mutex);      /* release lock */
} while (true);
Reader:
do {
    lock(mutex);           /* update read_count atomically */
    read_count++;
    if (read_count == 1) {
        lock(rw_mutex);    /* first reader: ensure no writer can enter */
    }
    unlock(mutex);         /* allow other readers to access */
    ...
    /* reading is performed */
    ...
    lock(mutex);
    read_count--;
    if (read_count == 0) {
        unlock(rw_mutex);  /* allow writers after the last reader has left the CS */
    }
    unlock(mutex);         /* release lock */
} while (true);
First of all, this is my understanding of mutex locks: once we create a lock/unlock pair, the code between the two calls can be executed by only a single thread at a time.
If my understanding is right, I can pretty much follow the Writer section of the pseudocode above: we lock, write to the shared resource while no one else can access it, and then unlock.
But I have trouble understanding the reader part. If we lock once, it stays locked until we unlock it again, right? In that case, what's the use of locking twice in the reader's section?
My main question is this:
What does locking actually mean, and what's the difference between lock(rw_mutex) and lock(mutex) in the pseudocode above? If calling lock simply locks, regardless of the parameter we pass in, then what do the parameters rw_mutex and mutex mean here? How does locking multiple mutexes work?
The way to think about mutexes is like this: a mutex is like a token that at any point in time can either be held by one thread, or available for any thread to take.
When a thread calls lock() on a mutex, it is attempting to take the mutex: if the mutex is available ("unlocked") then it will take it straight away, otherwise if it is currently held by another thread ("locked") then it will wait until it is available.
When a thread calls unlock() on a mutex, it is returning a mutex that it currently holds so that it is available for another thread to take.
If you have more than one mutex, each mutex is independent: a thread can hold neither, one, or both of them.
In your Reader, a thread first acquires mutex. While mutex is owned by the thread, no other thread can acquire mutex, so no other thread can be executing between either of the lock(mutex); / unlock(mutex); pairs (one at the top of the Reader function and one further down). Because read_count is only ever accessed within such a pair (while mutex is held), we know that only one thread will access read_count at a time.
If the Reader has just incremented read_count from zero to one, it will also acquire the rw_mutex mutex. This prevents any other thread from acquiring that mutex until it has been released, which has the effect of preventing Writer from proceeding into its critical section.
This code effectively passes ownership of the rw_mutex from the thread that locked it in Reader to any remaining readers in the critical section, when that thread leaves the critical section. This is just a matter of the code logic - no actual call is required to do this (and it's only possible because it is using a semaphore to implement rw_mutex, and not for example a pthreads mutex, which must be released by the thread that locked it).
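A minimal sketch of that ownership-passing property, assuming POSIX semaphores (the names are illustrative): the semaphore is acquired in one thread and legally released by another, which is exactly what a pthreads mutex forbids.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

static sem_t rw_mutex;
static int data;

/* Stand-in for "the last reader leaves the critical section" and
   releases rw_mutex on behalf of the thread that locked it. */
static void *last_reader(void *arg) {
    (void)arg;
    data = 7;
    sem_post(&rw_mutex);   /* releases a semaphore ANOTHER thread acquired */
    return NULL;
}

static int demo(void) {
    pthread_t t;
    sem_init(&rw_mutex, 0, 1);
    sem_wait(&rw_mutex);   /* "first reader" acquires it in this thread */
    pthread_create(&t, NULL, last_reader, NULL);
    sem_wait(&rw_mutex);   /* blocks until the other thread posts */
    pthread_join(t, NULL);
    return data;           /* visible thanks to the semaphore ordering */
}
```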
I have one sender thread and 40 worker threads. There is a single queue. All of the 40 threads write to the queue and the sender thread periodically reads from the shared queue and sends the data read over a tcp socket (say after every 1 sec, the sender thread must read data from the queue and send it over the socket). I have a question here:
If any of the 40 threads is in the critical section while the others are waiting to enter it, and at that moment the 1-second timer fires, I want to ignore the requests of all the other threads: the sender thread must be given priority and get the critical section next.
In other words, I want to give the sender thread priority 1, i.e. when the sender thread calls EnterCriticalSection(), all other threads waiting to enter the critical section must be passed over, and as soon as the critical section becomes free it must be handed to the sender thread.
Is there any way to achieve this functionality?
You cannot achieve this with priorities alone, because if a worker thread is holding the lock, priority cannot force it to release. Here is one implementation I can think of:
As soon as the sender thread wakes up after its 1-second interval, it sends a signal to the worker threads, and in the signal handler you make the lock unavailable (a binary semaphore would work well here, so set its value to 0 in the handler); any worker that then tries to acquire it will block. On the sender side, send all the packets and at the end set the semaphore back to 1.
This is one implementation; you can design your own along these lines, but eventually it should work. :)
You likely just want some variant of a reader-writer lock. And probably just a plain Win32 critical section lock is all that is needed.
Here's why. The operations in the critical section, appending data to the queue (or reading from it), are non-blocking: no operation on the queue should take longer than a fraction of a millisecond. If you use the Windows critical section lock (EnterCriticalSection, LeaveCriticalSection), fairness is guaranteed to threads waiting to enter the CS (I'm fairly certain of this).
So even if all 40 writer threads need to enter the CS to append to the queue, it shouldn't take more than a millisecond or two for the reader thread to get its turn to acquire the lock. This of course assumes that the writer threads only copy memory into the queue and do not perform any long blocking I/O operations while holding the lock.
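A sketch of the point being made, using a POSIX mutex (the Win32 version would use EnterCriticalSection/LeaveCriticalSection; `queue` and its helpers are illustrative names): every operation under the lock is a short memory operation, and the slow TCP send happens only after the lock is released.

```c
#include <pthread.h>

#define QCAP 1024

typedef struct {
    pthread_mutex_t lock;
    int items[QCAP];
    int head, tail, count;
} queue;

static void queue_init(queue *q) {
    pthread_mutex_init(&q->lock, NULL);
    q->head = q->tail = q->count = 0;
}

/* Worker threads: O(1) memory copy under the lock. */
static int queue_push(queue *q, int v) {
    pthread_mutex_lock(&q->lock);
    int ok = q->count < QCAP;
    if (ok) {
        q->items[q->tail] = v;
        q->tail = (q->tail + 1) % QCAP;
        q->count++;
    }
    pthread_mutex_unlock(&q->lock);
    return ok;
}

/* Sender thread: grab everything quickly, then send OUTSIDE the lock. */
static int queue_drain(queue *q, int *out, int max) {
    pthread_mutex_lock(&q->lock);
    int n = 0;
    while (q->count > 0 && n < max) {
        out[n++] = q->items[q->head];
        q->head = (q->head + 1) % QCAP;
        q->count--;
    }
    pthread_mutex_unlock(&q->lock);
    return n;   /* do the slow TCP send here, after releasing the lock */
}

static int queue_demo(void) {
    queue q;
    int out[8];
    queue_init(&q);
    for (int i = 1; i <= 3; i++) queue_push(&q, i);
    int n = queue_drain(&q, out, 8);
    return n == 3 && out[0] == 1 && out[2] == 3;
}
```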
Hope this helps in solving your problem http://man7.org/linux/man-pages/man3/pthread_getschedparam.3.html
One possible solution to your issue lies in the way threads are implemented in Linux. Have a mutex. Let your sender thread create a named FIFO (using the mkfifo() call), and when you create your 40 worker threads, have each of them create a named FIFO for receiving in its thread function. Whenever your sender thread wants to communicate with one of the worker threads, open() the worker's FIFO, write to it, and close it. Whenever you open a FIFO, take the mutex lock, do whatever you want (read/write), and unlock the mutex when you are done.
While reading about binary semaphore and mutex I found the following difference:
Both can have the values 0 and 1, but a mutex can be unlocked only by the same thread that acquired it. A thread that acquires a mutex lock can be subject to priority inversion when a higher-priority process wants the same mutex, whereas this is not the case with a binary semaphore.
So where should I use binary semaphores? Can anyone cite an example?
EDIT: I think I have figured out how both work. Basically, a binary semaphore offers synchronization, whereas a mutex offers a locking mechanism. I read some examples from the Galvin OS book to make it clearer.
One typical situation where I find binary semaphores very useful is for thread initialization where the thread will read from a structure owned by the parent thread. The parent thread needs to wait for the new thread to read the shared data from the structure before it can let the structure's lifetime end (by leaving its scope, for instance). With a binary semaphore, all you have to do is initialize the semaphore value to zero and have the child post it while the parent waits on it. Without semaphores, you'd need a mutex and condition variable and much uglier program logic for using them.
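A minimal sketch of that initialization handshake, assuming POSIX semaphores (the struct and function names are illustrative): the semaphore starts at 0, the child posts it once it has read the parent-owned structure, and only then does the parent let the structure's lifetime end.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

struct start_args {
    int config;      /* data owned by the parent thread */
    sem_t copied;    /* initialised to 0: "child has read config" */
};

static int child_saw;

static void *child(void *p) {
    struct start_args *a = p;
    child_saw = a->config;   /* read the parent-owned structure */
    sem_post(&a->copied);    /* after this, the parent may destroy it */
    return NULL;
}

static int demo(void) {
    pthread_t t;
    struct start_args a = { .config = 42 };
    sem_init(&a.copied, 0, 0);
    pthread_create(&t, NULL, child, &a);
    sem_wait(&a.copied);     /* parent blocks until the child has read */
    /* `a` could now safely go out of scope */
    pthread_join(t, NULL);
    sem_destroy(&a.copied);
    return child_saw;
}
```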
In almost all cases I use a binary semaphore to signal another thread without locking.
Simple example of usage for synchronous request:
Thread 1:
Semaphore sem;
request_to_thread2(&sem); // Function sending request to thread2 in any fashion
sem.wait(); // Waiting request complete
Thread 2:
Semaphore *sem;
process_request(sem); // Process request from thread 1
sem->post(); // Signal thread 1 that request is completed
Note: before posting the semaphore in thread 2's processing, you can safely set thread 1's data without any additional synchronization.
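The pseudocode above maps almost directly onto POSIX semaphores; a minimal sketch (the `request` struct and the doubling "processing" are illustrative, not from the answer):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

struct request {
    int input, result;
    sem_t done;              /* initialised to 0: "request complete" */
};

/* Plays the role of thread 2: service the request, then signal. */
static void *thread2(void *p) {
    struct request *r = p;
    r->result = r->input * 2;   /* set thread 1's data first... */
    sem_post(&r->done);         /* ...then signal completion */
    return NULL;
}

/* Plays the role of thread 1: issue a request and wait on it. */
static int demo(void) {
    pthread_t t;
    struct request r = { .input = 21, .result = 0 };
    sem_init(&r.done, 0, 0);
    pthread_create(&t, NULL, thread2, &r);
    sem_wait(&r.done);          /* wait until the request completes */
    pthread_join(t, NULL);
    sem_destroy(&r.done);
    return r.result;
}
```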
The canonical example for using a counting semaphore instead of a binary mutex is when you have a limited pool of resources that are a) interchangeable and b) more than one.
For instance, if you want to allow a maximum of 10 readers to access a database at once, you can use a counting semaphore initialized to 10 to limit access to the resource. Each reader must acquire the semaphore before accessing the resource, decrementing the available count. Once the count reaches 0 (i.e. 10 readers have gained access to, and are still using, the database), all other readers are locked out. When a reader finishes, it bumps the semaphore count back up by one to indicate that it is no longer using the resource, and some other reader may now obtain the semaphore and gain access in its stead.
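That 10-reader cap can be sketched with a POSIX counting semaphore (illustrative names; the `in_db`/`bad` counters are instrumentation that just verifies the cap is never exceeded):

```c
#include <pthread.h>
#include <semaphore.h>

#define MAX_READERS 10

static sem_t db_slots;      /* initialised to 10: available reader slots */
static int in_db = 0;       /* readers currently "in the database" */
static int bad = 0;         /* set if the cap is ever exceeded */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *reader(void *arg) {
    (void)arg;
    sem_wait(&db_slots);            /* take one of the 10 slots */
    pthread_mutex_lock(&m);
    if (++in_db > MAX_READERS)
        bad = 1;                    /* never more than 10 inside */
    pthread_mutex_unlock(&m);
    /* ... query the database ... */
    pthread_mutex_lock(&m);
    in_db--;
    pthread_mutex_unlock(&m);
    sem_post(&db_slots);            /* give the slot back */
    return NULL;
}

static int demo(void) {
    pthread_t t[40];
    sem_init(&db_slots, 0, MAX_READERS);
    for (int i = 0; i < 40; i++) pthread_create(&t[i], NULL, reader, NULL);
    for (int i = 0; i < 40; i++) pthread_join(t[i], NULL);
    return bad == 0;                /* the cap held for all 40 readers */
}
```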
However, the counting semaphore, just like all other synchronization primitives, has many use cases, and it's just a matter of thinking outside the box. You may find that many problems you are used to solving with a mutex plus additional logic can be implemented more easily and more straightforwardly with a semaphore. A mutex is a subset of the semaphore: anything you can do with a mutex can be done with a semaphore (simply set the count to one), but there are things a semaphore can do that a mutex alone cannot.
At the end of the day, any one synchronization primitive is generally enough to do anything (think of it as being "Turing-complete" for thread synchronization, to bastardize that term). However, each is tailored to a different application, and while you may be able to force one to do your bidding with some customization and additional glue, a different synchronization primitive may be a better fit for the job.