I am making a theatre-reservation application for a project. My theatre has 10 operators, so only 10 clients can make reservations simultaneously. When a client is done, another one is connected to an operator.
I have a server-client connection established. Every time a new client appears I create a new thread. My problem now is how to let at most 10 threads do their work simultaneously. I know that I have to use condition variables but I don't know how exactly.
Here are my thoughts: whenever a client connects
lock mutex()
counter++;
if (counter > 10)
    block thread() until an operator is free
else
    do computations
unlock mutex()
I know I have to use pthread_cond_signal and pthread_cond_wait but I don't know how exactly. Any help?
Before creating the threads:
sem_t *sem;
sem = (sem_t*)malloc(sizeof(sem_t));
sem_init(sem, 0, 10);
Inside the threads:
sem_wait(sem);
do computations
sem_post(sem);
The last parameter of sem_init is the initial value of the semaphore, i.e., how many threads are allowed to execute the guarded section at the same time.
Every time you call sem_wait that value is decremented; if it is already 0, the calling thread blocks until the value becomes positive again.
When you call sem_post the value is incremented by 1 and one blocked thread (if any) can proceed into the inner code.
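To put the pieces together for the theatre example, here is a minimal self-contained sketch (handle_client, NUM_OPERATORS and the client count of 50 are illustrative assumptions, not code from the question; error checking is omitted):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_OPERATORS 10          /* at most 10 clients are served at once */

static sem_t operators;           /* counting semaphore, initialized to 10 */

/* one thread per connected client */
static void *handle_client(void *arg)
{
    long id = (long)arg;

    sem_wait(&operators);         /* blocks until an operator is free */
    printf("client %ld is being served\n", id);
    sleep(1);                     /* stand-in for the real reservation work */
    sem_post(&operators);         /* release the operator for the next client */
    return NULL;
}

int main(void)
{
    pthread_t tid[50];

    sem_init(&operators, 0, NUM_OPERATORS);
    for (long i = 0; i < 50; i++) /* pretend 50 clients connect */
        pthread_create(&tid[i], NULL, handle_client, (void *)i);
    for (int i = 0; i < 50; i++)
        pthread_join(tid[i], NULL);
    sem_destroy(&operators);
    return 0;
}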
I am encountering an issue where I have a hard time telling which synchronization primitive I should use.
I am creating n parallel threads that work on a region of memory. Each is assigned a specific part of this region and can accomplish its task independently of the others. At some point, though, I need to collect the results of the work of all the threads, which is a good case for using barriers, and that is what I'm doing.
I must use one of the n worker threads to collect the result of all their work; for this I have the following code, which follows the computation code in my thread function:
if (pthread_barrier_wait(thread_args->barrier)) {
    // Only gets called on the last thread that goes through the barrier
    // This is where I want to collect the results of the worker threads
}
So far so good, but now is where I get stuck: the code above is in a loop as I want the threads to accomplish work again for a certain number of loop spins. The idea is that each time pthread_barrier_wait unblocks it means all threads have finished their work and the next iteration of the loop / parallel work can start again.
The problem with this is that the statements in the result-collector block are not guaranteed to execute before the other threads start working on this region again, so there is a race condition. I am thinking of using a POSIX condition variable like this:
// This code is placed in the thread entry point function, inside
// a loop that also contains the parallel processing code.
if (pthread_barrier_wait(thread_args->barrier)) {
    // We lock the mutex
    pthread_mutex_lock(thread_args->mutex);
    collectAllWork(); // We process the work from all threads
    // Set ready to 1
    thread_args->ready = 1;
    // We broadcast the condition variable and check it was successful
    if (pthread_cond_broadcast(thread_args->cond)) {
        printf("Error while broadcasting\n");
        exit(1);
    }
    // We unlock the mutex
    pthread_mutex_unlock(thread_args->mutex);
} else {
    // Wait until the other thread has finished its work so
    // we can start working again
    pthread_mutex_lock(thread_args->mutex);
    while (thread_args->ready == 0) {
        pthread_cond_wait(thread_args->cond, thread_args->mutex);
    }
    pthread_mutex_unlock(thread_args->mutex);
}
There are multiple issues with this:
For some reason pthread_cond_broadcast never unblocks any other thread waiting in pthread_cond_wait; I have no idea why.
What happens if a thread pthread_cond_waits after the collector thread has broadcasted? I believe while (thread_args->ready == 0) and thread_args->ready = 1 prevents this, but then see next point...
On the next loop spin, ready will still be set to 1 hence no thread will call pthread_cond_wait again. I don't see any place where to properly set ready back to 0: if I do it in the else block after pthread_cond_wait, there is the possibility that another thread that wasn't cond waiting yet reads 1 and starts waiting even if I already broadcasted from the if block.
Note I am required to use barriers for this.
How can I solve this issue?
You could use two barriers (work and collector):
while (true) {
    // do work

    // every thread waits until the last thread has finished its work
    if (pthread_barrier_wait(thread_args->work_barrier)) {
        // only one gets through, then does the collecting
        collectAllWork();
    }

    // every thread will wait until the collector has reached this point
    pthread_barrier_wait(thread_args->collect_barrier);
}
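Both barriers have to be initialized for the same number of participating threads, e.g. (assuming n worker threads and the barriers living in the shared thread_args structure, as in the snippet above):

/* one-time setup before spawning the n worker threads */
pthread_barrier_init(thread_args->work_barrier, NULL, n);
pthread_barrier_init(thread_args->collect_barrier, NULL, n);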
You could use a kind of double buffering.
Each worker would have two storage slots for results.
Between the barriers the workers would store their results to one slot while the collector would read results from the other slot.
This approach has a few advantages:
no extra barriers
no condition queues
no locking
the slot identifier does not even have to be atomic, because each thread can keep its own copy of it and toggle it whenever it reaches a barrier
much more performant, as workers can keep working while the collector is processing the other slot
Example workflow:
Iteration 1.
workers write to slot 0
collector does nothing because no data is ready
all wait for barrier
Iteration 2.
workers write to slot 1
collector reads from slot 0
all wait for barrier
Iteration 3.
workers write to slot 0
collector reads from slot 1
all wait for barrier
Iteration 4.
go to iteration 2
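A rough sketch of that idea on top of a single barrier (the results array, do_work() and collectSlot() are illustrative assumptions, not code from the question):

#include <pthread.h>
#include <stdint.h>

#define N_WORKERS 4
#define MAX_ITERS 100

/* hypothetical helpers standing in for the real computation and collection */
double do_work(int id, int iter);
void   collectSlot(double *slot);

/* results[slot][worker] -- two slots per worker for double buffering */
static double results[2][N_WORKERS];
static pthread_barrier_t barrier;    /* initialized for N_WORKERS threads */

static void *worker(void *arg)
{
    int id = (int)(intptr_t)arg;
    int write_slot = 0;              /* private copy, no atomics needed */

    for (int iter = 0; iter < MAX_ITERS; iter++) {
        results[write_slot][id] = do_work(id, iter);

        if (pthread_barrier_wait(&barrier) == PTHREAD_BARRIER_SERIAL_THREAD) {
            /* exactly one thread gets the "serial" return value; it collects
               the slot everyone just wrote, while the other workers may
               already be writing the opposite slot for the next iteration */
            collectSlot(results[write_slot]);
        }
        write_slot = 1 - write_slot; /* each thread toggles its own copy */
    }
    return NULL;
}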
This is my code:
wait(){
while(S<=0)
//puts the thread in the block list until it wakes up(by calling post)
S = S-1
}
There is a while loop in the wait function of a semaphore; can't I simply use an if statement?
Because we can't assume that, after a thread is woken up and re-acquires the lock, another thread has not already come along and taken the resource this is guarding:
wait() {
    Some lock_guard(mutex);  // You lock here.
    while (S <= 0) {
        condition.wait(lock_guard); // While you wait here
                                    // the lock is released.
        // When the condition/semaphore is signalled
        // one or more threads may be released,
        // but they must acquire the lock before
        // they return from wait.
        //
        // Another thread may enter this function,
        // acquire the lock and decrement S below
        // before the waiting thread acquires the
        // lock, and thus must be re-suspended.
    }
    S = S - 1;
}
I take the
//puts the thread in the block list until it wakes up(by calling post)
comment as a placeholder for code that really does what the comment describes, and take the code overall to be meant as a schematic implementation of a semaphore (otherwise there is no semaphore to be found in it, and the [linux-kernel] tag also inclines me in this direction). In that event ...
Consider the case that two threads are blocked trying to decrement the semaphore. A third thread increments the semaphore to the value 1, causing both of the first two to unblock. Only one of the erstwhile-blocked threads can be allowed to decrement the semaphore at that point, else its value would drop below zero. The other needs to detect that it cannot proceed after all and go back to waiting. That's what the loop accomplishes.
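To make that concrete, here is a minimal sketch of a counting semaphore built on a pthread mutex and condition variable (my own illustration, not any particular kernel's implementation); the while loop is exactly where an if would break in the two-waiters scenario above:

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             value;
} my_sem_t;

void my_sem_wait(my_sem_t *s)
{
    pthread_mutex_lock(&s->lock);
    /* An `if` here would let both woken threads fall through and drive
       the value below zero; the while re-checks after every wakeup. */
    while (s->value <= 0)
        pthread_cond_wait(&s->cond, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void my_sem_post(my_sem_t *s)
{
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->cond);   /* wake one waiter, if any */
    pthread_mutex_unlock(&s->lock);
}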
What you have here is called active waiting (busy waiting). The thread or process waits for the variable S to become positive in order to access the critical section. A single if would only check once and then go on to the next instruction (in this case an instruction from the critical section, which would be a huge error). That's why it should wait in a loop: in order to actually wait, not just check the condition once.
But your code is not doing what you think it does.
while(S == 0) {}
or
while(S == 0);
would do the work. Your code constantly does S = S - 1 and, with your condition, creates an infinite loop. S in semaphores should never go lower than 0, as that would mean a thread entered the critical section without permission.
I have a big problem with semaphores in C. Here is the link to the inspiration for my code: http://cse.unl.edu/~ylu/csce351/notes/Solution%20for%20Building%20H2O.pdf.
There are two similar pieces of code, one for hydrogen and one for oxygen. This is the idea: processes are generated for oxygen and hydrogen, and they are created at different times. When there are 2 hydrogens and 1 oxygen they call the function bond(), but they have to wait for each other. After the condition is evaluated as false it is supposed to switch to another process (or at least that is how I understand it). But in my code it continues to the next command, which means it won't wait for all the processes that I need. It prints to output after every process that is created, even if it is supposed to wait. Does anyone know what's wrong here?
(I can post more of the code if this is not enough.)
OXYGEN CODE (hydrogen is similar):
sem_wait(mutex);
if ((*hydrogen >= 2) && (*oxigen >= 1))
{
    (*count_c)++;
    *count_cur_h -= 2;
    sem_post(hydrel);
    sem_post(hydrel);
    *count_cur_o -= 1;
    sem_post(oxrel);
}
else
{
    (*count_c)++;
    sem_post(mutex); // This is the place where it is supposed
                     // to release and continue to another process,
                     // but it goes to the next command.
}
sem_wait(oxrel);
bond();
sem_wait(barrier);

// semaphores are initialized like this:
sem_init(mutex,1,1);
sem_init(oxrel,1,1);
sem_init(hydrel,1,2);
sem_init(barrier,1,3);
sem_post is not a blocking call; sem_wait is the blocking call. If the value of the semaphore is zero when sem_wait is called, the calling thread will block. sem_post is used to release another thread that is blocked in sem_wait when the semaphore value is zero, but it does not block itself. The sem_post call 'wakes up a thread waiting in sem_wait' and then continues onwards, and both threads will then run at the same time (if you have at least 2 logical CPUs). If you want the thread that called sem_post to block at that point, you will need to do something else (like add yet another semaphore).
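As a rough illustration of that last suggestion (the two semaphores and thread roles below are my own example, not a drop-in fix for the H2O code above), a thread that posts can be made to block until the released thread acknowledges:

#include <semaphore.h>

sem_t work_ready;   /* initialized to 0: worker blocks until work is posted */
sem_t work_done;    /* initialized to 0: poster blocks until acknowledged   */

void poster(void)
{
    sem_post(&work_ready);   /* wakes the worker; sem_post itself returns at once */
    sem_wait(&work_done);    /* ...so block here until the worker acknowledges    */
}

void worker(void)
{
    sem_wait(&work_ready);   /* blocks until the poster signals */
    /* do the work */
    sem_post(&work_done);    /* let the poster continue */
}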
I have the following problem to solve:
Consider an application where there are three types of threads: Calculus-A, Calculus-B and Finalization. Whenever a thread of type Calculus-A ends, it calls the routine endA(), which returns immediately. Whenever a thread of type Calculus-B ends, it calls the routine endB(), which returns immediately. Finalization threads call the routine wait(),
which returns only after two Calculus-A threads and two Calculus-B threads have completed. In other words, for exactly 2 conclusions of Calculus-A and 2 conclusions of Calculus-B, one Finalization thread is allowed to continue.
There is an undetermined number of threads of the 3 types. The order in which the threads call the routines is not known. Threads Completion are answered in the order of arrival.
Implement the routines endA(), endB() and wait() using semaphores. Besides variable initialization, the only possible operations are P and V. Solutions with busy-waiting are not acceptable.
Here's is my solution:
semaphore calcA = 2;
semaphore calcB = 2;
semaphore wait = -3;
void endA()
{
    P(calcA);
    V(wait);
}

void endB()
{
    P(calcB);
    V(wait);
}

void wait()
{
    P(wait);
    P(wait);
    P(wait);
    P(wait);
    V(calcA);
    V(calcA);
    V(calcB);
    V(calcB);
}
I believe that there will be a deadlock due to wait's initialization, and if wait() executes before endA() and endB(). Is there any other solution for this?
I tend to view semaphore problems as problems where one must identify "sources of waiting" and define for each a semaphore and a protocol for their access.
With that in mind, the "sources of waiting" are
Completions of CalcA
Completions of CalcB
Maybe, if I understood this right, a wait on whole completion groups, consisting of two CalcAs and two CalcBs. I say maybe because I'm not sure what "Threads Completion are answered in the order of arrival." means.
Completions of CalcA and CalcB should therefore increment their respective counters. At the other end, one Finalization thread gains exclusive access to the counters and waits in any order for the needed number of completions to constitute a completion group. It then unlocks access to the next group.
My code is below, although since I'm unfamiliar with the Dutch V and P I will use take()/give().
semaphore calcA = 0;
semaphore calcB = 0;
semaphore groupSem = 1;

void endA() {
    give(calcA);
}

void endB() {
    give(calcB);
}

void wait() {
    take(groupSem);
    take(calcA);
    take(calcA);
    take(calcB);
    take(calcB);
    give(groupSem);
}
The groupSem semaphore ensures all-or-nothing: the thread that enters the critical section will get the next two completions of each of CalcA and CalcB. If groupSem weren't there, the first thread to enter wait could take two As and block, then be overtaken by another thread that grabs two As and two Bs and runs away.
A worse problem if groupSem isn't there: the second thread takes two As and one B and then blocks, and then the first thread grabs the second B. If the result of the finalization somehow allows more runs of Calculus-A and Calculus-B, you may have a deadlock, because there may be no further opportunity for instances of Calculus-A and Calculus-B to complete, leaving the finalization threads hanging, unable to produce more calculation instances.
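If you want to try this with POSIX semaphores, take()/give() map directly onto sem_wait()/sem_post(); a sketch under that assumption (wait is renamed wait_group here to avoid clashing with the standard wait()):

#include <semaphore.h>

static sem_t calcA, calcB, groupSem;

void init(void)
{
    sem_init(&calcA, 0, 0);     /* completions of Calculus-A */
    sem_init(&calcB, 0, 0);     /* completions of Calculus-B */
    sem_init(&groupSem, 0, 1);  /* one Finalization thread collects a group at a time */
}

void endA(void) { sem_post(&calcA); }
void endB(void) { sem_post(&calcB); }

void wait_group(void)
{
    sem_wait(&groupSem);
    sem_wait(&calcA);
    sem_wait(&calcA);
    sem_wait(&calcB);
    sem_wait(&calcB);
    sem_post(&groupSem);
}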
I have an application that waits for clients to connect. Each time a client connects, a new frame gets created (with the new socket file descriptor). I know how many clients will connect; after I reach that number I just run pthread_join in a for loop.
My problem is that I would like the main thread to control all the other threads. My goal is to have each thread send the same message back to the client, at the same time, and only once. There are multiple messages a thread can send.
My current thinking is to define a list of commands, as follows:
char *commands[] = {
(char*) "TERMINATE\0",
.... };
And then specify a command number that represents which command to use in that char* array. All threads will do something like
write(sockfd, buffer[commandNumber], length[commandNumber]);
I thought about waiting on a condition variable, but I see two problems:
1) I want to make sure that each thread, although synchronized, executes the command only once.
2) The main thread that initiates the command has to know when all those threads are done executing the command.
The only way I see to achieve 2) is to keep track of a counter (protected by a mutex): when each thread executes the command, it increments that counter. I am not sure I will be able to prevent a thread from running the command twice.
What is the best way to coordinate multiple threads so they execute a single action at once, and also to know when that action has finished executing in every thread?
You might use a barrier to gate the operation.
Synchronizing the send
The main thread initializes a barrier named "Ready" to N+1. Then it begins accept()ing N client connections, spawning a worker thread for each. The new worker threads immediately wait on barrier "Ready".
After spawning the Nth (and last) worker, the main thread sets the desired command (perhaps using a global commandNumber). Then the main thread waits on barrier "Ready". As soon as all workers and the main thread have arrived (reaching the barrier's limit of N+1), all threads are released, knowing that they are ready to issue their command immediately.
(A common alternate approach is to use a predicate and condition variable rather than a barrier. For example, the main thread might spawn the Nth worker and then cond_broadcast() that it has set a flag ready = 1. This approach is flawed. The main thread cannot know that the Nth worker — or, indeed, any of the workers — are yet waiting on that condition. The barrier solves this problem.)
Indicating completion
Another N+1 barrier, "AllDone", could be used to indicate that the workers are all done. A counting semaphore that each worker posts once and that the main thread waits on N times would do the same. Having the workers close() their connections and the main thread select()ing or poll()ing on the connections would convey the same information, too.
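A compressed sketch of both barriers together (N, the commands/length arrays from the question, and the elided accept() loop are placeholders; error checking omitted):

#include <pthread.h>
#include <unistd.h>

#define N 10                         /* number of expected clients */

extern char  *commands[];            /* command strings, as in the question */
extern size_t length[];              /* matching lengths                    */

static pthread_barrier_t ready;      /* N workers + main thread */
static pthread_barrier_t all_done;   /* N workers + main thread */
static int commandNumber;            /* set by main before it reaches "ready" */

static void *worker(void *arg)
{
    int sockfd = (int)(long)arg;

    pthread_barrier_wait(&ready);     /* wait until main has chosen the command */
    /* every worker sends the same command exactly once */
    write(sockfd, commands[commandNumber], length[commandNumber]);
    pthread_barrier_wait(&all_done);  /* tell main we are finished */
    return NULL;
}

int main(void)
{
    pthread_barrier_init(&ready, NULL, N + 1);
    pthread_barrier_init(&all_done, NULL, N + 1);

    /* accept() N client connections here, spawning one worker per socket ... */

    commandNumber = 0;                /* e.g. "TERMINATE" */
    pthread_barrier_wait(&ready);     /* release all workers at once */
    pthread_barrier_wait(&all_done);  /* returns once every worker has sent */
    return 0;
}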