I have two threads, the main thread 'A' is responsible for message handling between a number of processes. When thread A gets a buffer full message, it should inform thread B and pass a pointer to the buffer which thread B will then process.
When thread B has finished it should inform thread A that it has finished.
How do I go about implementing this using POSIX threads in C on Linux? I have looked at condition variables; is this the way to go? I'm not experienced in multithreaded programming and would like some advice on the best avenue to take.
Thanks
If you relax the conditions that the buffer must be full before B starts processing it and that the buffer must be empty before A starts filling it again, then this is the classic producer-consumer problem.
If you cannot relax those conditions, then I do not see the benefit of separating the functionality between two threads. Since thread A cannot add to the buffer while thread B is processing, and thread B cannot do any processing while thread A is adding to the buffer, then all the work can be done in a single thread.
Yes, condition variables and mutexes are two things you will have to use when implementing your solution.
You can take a look at the section "A few ways to use threads" for an explanation of how to do it.
How about using a POSIX semaphore to represent the number of filled buffers? The pointers could be passed over a shared ring buffer. Depending on how you want to handle overflows, you may need to protect it with a mutex.
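For illustration, here is a minimal C sketch of that idea. The names (ring, submit_buffer, take_buffer) and the fixed ring size are mine, not part of any standard API; a second semaphore or a return queue could carry the "finished" notification back to thread A.

#include <pthread.h>
#include <semaphore.h>

#define RING_SIZE 8

static void           *ring[RING_SIZE];     /* pointers to filled buffers */
static int             head = 0, tail = 0;
static pthread_mutex_t ring_lock = PTHREAD_MUTEX_INITIALIZER;
static sem_t           filled;              /* call sem_init(&filled, 0, 0) once */

/* Thread A: hand a filled buffer over to thread B. */
void submit_buffer(void *buf)
{
    pthread_mutex_lock(&ring_lock);
    ring[head] = buf;
    head = (head + 1) % RING_SIZE;          /* overflow handling omitted */
    pthread_mutex_unlock(&ring_lock);
    sem_post(&filled);                      /* wake thread B */
}

/* Thread B: block until a buffer is available, then take it. */
void *take_buffer(void)
{
    void *buf;
    sem_wait(&filled);                      /* sleeps while the count is zero */
    pthread_mutex_lock(&ring_lock);
    buf = ring[tail];
    tail = (tail + 1) % RING_SIZE;
    pthread_mutex_unlock(&ring_lock);
    return buf;
}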
I'm trying to implement a simple mutex lock using a semaphore that does not fall victim to starvation. In order to do this, I'm pretty sure I need to implement some sort of queue or other first-in-first-out approach, but semaphores in C appear not to respect any FIFO structure. Given that, I've not been able to figure out how to wake and sleep threads in a proper FIFO order. Then again, perhaps I'm barking up the entirely wrong tree in my approach.
This link, https://pubs.opengroup.org/onlinepubs/7908799/xsh/sem_post.html, implies that something with SCHED_FIFO might be able to resolve my issue, but my relative inexperience with C leaves me unsure whether it could solve my problem or how I would implement a solution.
Does C have a way of enabling a FIFO "fair" semaphore to make a fair lock that avoids starvation, or do you need a separate queuing system of some sort? And in either case, how would you approach its implementation?
Thanks for any input you can provide!
Are you only allowed to use a single semaphore as the means to block a thread? If so, then I don't think there is any pretty solution. Here's an ugly one: (pseudo-code)
queue.put(my_thread_id);
semaphore.dec();
while (queue.head() != my_thread_id) {
    semaphore.inc();
    sleep(VERY_SMALL_TIME_INTERVAL);
    semaphore.dec();
}
(void) queue.pop();
...do whatever...
semaphore.inc();
Suppose that thread A releases the semaphore while threads B, C, and D are awaiting it. Exactly one of B, C, or D will awaken. It will look at the queue, and if its own thread ID is at the head of the queue, it will pop the ID and proceed to do whatever. Otherwise, it will awaken one of the other two, sleep for a bit, and then try again.
In this way, each of the threads will be awakened, one-by-one, until one of them sees its own ID and breaks out of the loop.
The sleep(...) is important. Without it, the fundamental unfairness of the semaphore would make it likely that the subsequent semaphore.dec() call would immediately succeed, and the same thread would keep going round the loop, not seeing its own ID, and starving the others. The sleep(...) blocks the caller, thereby encouraging the OS to waken one of the other waiting threads.
OTOH, are you using the Posix Threads Library (pthreads)? And are you allowed to use any means available to block and awaken waiting threads? In that case, you could use a condition variable instead of the semaphore. You'd still need an explicit queue, and you'd still need a loop, but you could get rid of the sleep(...) because pthread_cond_broadcast(...) simultaneously awakens all of the waiting threads.
Condition variables are a bit trickier than semaphores—easy to make mistakes. I suggest you google for a good tutorial if you want to go that way.
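To make the condition-variable route concrete, here is a rough sketch of a FIFO "fair" lock. It replaces the explicit queue of thread IDs with ticket numbers, which amounts to the same thing; all the names here are made up for the example.

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    unsigned long   next_ticket;    /* next ticket to hand out */
    unsigned long   now_serving;    /* ticket currently allowed to proceed */
} fair_lock;

#define FAIR_LOCK_INIT { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 }

void fair_lock_acquire(fair_lock *fl)
{
    pthread_mutex_lock(&fl->lock);
    unsigned long my_ticket = fl->next_ticket++;
    while (fl->now_serving != my_ticket)            /* wait for my turn, FIFO */
        pthread_cond_wait(&fl->cond, &fl->lock);
    pthread_mutex_unlock(&fl->lock);
}

void fair_lock_release(fair_lock *fl)
{
    pthread_mutex_lock(&fl->lock);
    fl->now_serving++;
    pthread_cond_broadcast(&fl->cond);              /* wake everyone; only the next
                                                       ticket holder proceeds */
    pthread_mutex_unlock(&fl->lock);
}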
Almost every resource that I have looked up has talked about how to enforce mutual exclusion, or deal with the producer/consumer problem.
The problem is that I need to get certain threads to execute before other threads, but can't figure out how. I am trying to use semaphores, but don't really see how they can help in my case.
I have
a read thread,
N number of search threads, and
a write thread.
The read thread fills a buffer with data, then the search threads parse the data and output it to a different buffer, which the write thread then writes to a file.
Any idea as to how I would accomplish this?
I can post the code I have so far if anyone thinks that would help.
I think what you're looking for is a monitor.
I would use a few condition variables.
You have read buffers, probably two. If the search threads are slow, you want the read thread to wait rather than use all the memory on buffers.
So: A read_ready condition variable and a read_ready_mutex. Set to true when there is an open buffer.
Next: A search_ready condition variable and a search_ready_mutex. Set to true when there is a complete read buffer.
Next: A write_ready condition variable and a write_ready_mutex. Set to true when there is work to do for the write thread.
Instead of true/false you could use integers of the number of buffers that are ready. As long as you verify the condition while the mutex is held and only modify the condition while the mutex is held, it will work.
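Here is what one of those pairs could look like in code, using a counter rather than a boolean as suggested above; the names are only illustrative.

#include <pthread.h>

static pthread_mutex_t search_ready_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  search_ready_cond  = PTHREAD_COND_INITIALIZER;
static int             full_read_buffers  = 0;     /* the guarded condition */

/* Read thread: announce one more completed buffer. */
void signal_search_ready(void)
{
    pthread_mutex_lock(&search_ready_mutex);
    full_read_buffers++;                            /* modify only under the mutex */
    pthread_cond_signal(&search_ready_cond);
    pthread_mutex_unlock(&search_ready_mutex);
}

/* Search thread: wait until a full buffer exists, then claim it. */
void wait_search_ready(void)
{
    pthread_mutex_lock(&search_ready_mutex);
    while (full_read_buffers == 0)                  /* verify only under the mutex */
        pthread_cond_wait(&search_ready_cond, &search_ready_mutex);
    full_read_buffers--;
    pthread_mutex_unlock(&search_ready_mutex);
}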
[Too long for a comment]
Cutting this down to two assumptions:
Searching cannot be done before reading has finished.
Writing cannot be done before searching has finished.
I conclude:
Do not use threads for reading and writing, but do it from the main thread.
Just do the searching in parallel using threads.
Generally speaking, threads are used precisely when we don't care about the order of execution.
If you want to execute some statements S1, S2, ... , SN in that order, then you catenate them into a program that is run by a single thread: { S1; S2; ...; SN }.
The specific problem you describe can be solved with a synchronization primitive known as a barrier (implemented as the POSIX function pthread_barrier_wait).
A barrier is initialized with a number, the barrier count N. Threads which call the barrier wait operation are suspended until N threads accumulate. Then they are all released. One of the threads receives a return value which tells it that it is the "serial thread".
So for instance, suppose we have N threads doing this read, process-in-parallel, and write sequence. It goes like this (pseudocode):
i_am_serial = barrier.wait();       # at first, everyone waits at the barrier
if (i_am_serial)                    # serial thread does the reading, preparing the data
    do_read_task();
barrier.wait();                     # everyone rendezvous at the barrier again
do_parallel_processing();           # everyone performs the processing on the data
i_am_serial = barrier.wait();       # rendezvous after parallel processing
if (i_am_serial)
    do_write_report_task();         # serialized integration and reporting of results
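For reference, the same flow with the actual POSIX calls might look roughly like this; N, the barrier initialization, and the task functions are placeholders for your own code.

#include <pthread.h>

#define N 4

/* Placeholders for the application's work. */
void do_read_task(void);
void do_parallel_processing(void);
void do_write_report_task(void);

/* Initialized once with: pthread_barrier_init(&barrier, NULL, N); */
static pthread_barrier_t barrier;

void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        /* exactly one thread gets PTHREAD_BARRIER_SERIAL_THREAD back */
        if (pthread_barrier_wait(&barrier) == PTHREAD_BARRIER_SERIAL_THREAD)
            do_read_task();                 /* serial thread prepares the data */

        pthread_barrier_wait(&barrier);     /* rendezvous: data is ready */
        do_parallel_processing();           /* everyone processes its share */

        if (pthread_barrier_wait(&barrier) == PTHREAD_BARRIER_SERIAL_THREAD)
            do_write_report_task();         /* serialized reporting of results */
    }
    return NULL;
}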
I am programming using pthreads in C.
I have a parent thread which needs to create 4 child threads with id 0, 1, 2, 3.
When the parent thread gets data, it will split the data and assign it to 4 separate context variables, one for each sub-thread.
The sub-threads have to process this data, and in the meantime the parent thread should wait on these threads.
Once these sub-threads have finished executing, they will set the output in their corresponding context variables and wait (for reuse).
Once the parent thread knows that all these sub-threads have completed this round, it computes the global output and prints it out.
Now it waits for new data (the sub-threads are not killed yet, they are just waiting).
If the parent thread gets more data the above process is repeated - albeit with the already created 4 threads.
If the parent thread receives a kill command (assume a specific kind of data), it indicates to all the sub-threads and they terminate themselves. Now the parent thread can terminate.
I am a Masters research student and I am encountering the need for the above scenario. I know that this can be done using pthread_cond_wait and pthread_cond_signal. I have written the code but it just runs indefinitely and I cannot figure out why.
My guess is that, the way I have coded it, I have over-complicated the scenario. It will be very helpful to know how this can be implemented. If there is a need, I can post a simplified version of my code to show what I am trying to do(even though I think that my approach is flawed!)...
Can you please give me any insights into how this scenario can be implemented using pthreads?
As far as can be seen from your description, there seems to be nothing wrong with the principle.
What you are trying to implement is a worker pool, I guess; there should be a lot of implementations out there. If the work that your threads are doing is a substantial computation (say at least a CPU second or so), such a scheme is complete overkill. Modern implementations of POSIX threads are efficient enough that they support the creation of a lot of threads, really a lot, and the overhead is not prohibitive.
The only thing that would be important, if you have your workers communicate through shared variables, mutexes etc. (and not via the return value of the thread), is that you start your threads detached, by using the attribute parameter to pthread_create.
Once you have such an implementation for your task, measure. Only then, if your profiler tells you that you spend a substantial amount of time in the pthread routines, start thinking of implementing (or using) a worker pool to recycle your threads.
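In case it helps, this is what "start your threads detached" looks like with the attribute parameter; worker_function and worker_context are stand-ins for your own code.

#include <pthread.h>

/* Your thread function; defined elsewhere. */
extern void *worker_function(void *arg);

int start_detached_worker(void *worker_context)
{
    pthread_t      tid;
    pthread_attr_t attr;
    int            rc;

    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);  /* never joined */
    rc = pthread_create(&tid, &attr, worker_function, worker_context);
    pthread_attr_destroy(&attr);
    return rc;                      /* 0 on success */
}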
One producer-consumer queue with 4 threads hanging off it. The thread that wants to queue the four tasks assembles the four context structs containing, as well as all the other data stuff, a function pointer to an 'OnComplete' func. Then it submits all four contexts to the queue, atomically incrementing a taskCount up to 4 as it does so, and waits on an event/condvar/semaphore.
The four threads get a context from the P-C queue and work away.
When done, the threads call the 'OnComplete' function pointer.
In OnComplete, the threads atomically count down taskCount. If a thread decrements it to zero, it signals the event/condvar/semaphore and the originating thread runs on, knowing that all the tasks are done.
It's not that difficult to arrange it so that the assembly of the contexts and the synchro waiting is done in a task as well, so allowing the pool to process multiple 'ForkAndWait' operations at once for multiple requesting threads.
I have to add that operations like this are a huge pile easier in an OO language. The latest Java, for example, has a 'ForkAndWait' threadpool class that should do exactly this kind of stuff, but C++, (or even C#, if you're into serfdom), is better than plain C.
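For what it's worth, here is a bare-bones C sketch of the taskCount countdown described above, using a mutex and condition variable; the fork_join name and its fields are mine, not a standard type.

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  all_done;
    int             tasks_left;     /* set to 4 before each round of work */
} fork_join;
/* e.g. fork_join fj = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 4 }; */

/* Called by each worker from its OnComplete step. */
void fork_join_complete(fork_join *fj)
{
    pthread_mutex_lock(&fj->lock);
    if (--fj->tasks_left == 0)
        pthread_cond_signal(&fj->all_done);   /* last one in wakes the parent */
    pthread_mutex_unlock(&fj->lock);
}

/* Called by the parent after queuing the four contexts. */
void fork_join_wait(fork_join *fj)
{
    pthread_mutex_lock(&fj->lock);
    while (fj->tasks_left > 0)                /* guards against spurious wakeups */
        pthread_cond_wait(&fj->all_done, &fj->lock);
    pthread_mutex_unlock(&fj->lock);
}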
I have a fixed size array (example: struct bucket[DATASIZE]) where at the very beginning I load information from a file. Since I am concerned about scalability and execution time, no dynamic array was used.
Each time I process half of the array I am free to replace those spots with more data from the file. I don't have a clear idea of how I would do that, but I thought about using pthreads to start 2 parallel tasks: one would do the actual data processing and the other would make sure to fill the array.
However, all the examples that I've seen on pthreads show all the threads working on the same task concurrently. Is there a way to have them do separate things? Any ideas, thoughts?
You can definitely have threads doing different tasks. The pattern you're after is very common - it's called a Producer-Consumer arrangement.
What you are trying to do seems very similar to the standard concurrency problem called producer-consumer (look it up; you will surely find an example using pthreads). That program has one fixed-size buffer which is filled by the producer and processed by the consumer.
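As a rough sketch of how that maps onto your two array halves (all names here are made up), each half gets a ready flag protected by one mutex, and the filler and the processor hand a half back and forth through it:

#include <pthread.h>

#define DATASIZE 1024
struct bucket { int payload; };                    /* stand-in for your struct */

static struct bucket   data[DATASIZE];
static int             half_ready[2] = { 0, 0 };   /* 1 = filled, waiting to be processed */
static pthread_mutex_t lock    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  changed = PTHREAD_COND_INITIALIZER;

/* Filler thread: wait until half h has been consumed, then reload it. */
void fill_half(int h)
{
    pthread_mutex_lock(&lock);
    while (half_ready[h])                          /* still being processed */
        pthread_cond_wait(&changed, &lock);
    pthread_mutex_unlock(&lock);

    /* ...read from the file into data[h * DATASIZE/2 ...] here, outside the lock... */

    pthread_mutex_lock(&lock);
    half_ready[h] = 1;                             /* hand it to the processor */
    pthread_cond_broadcast(&changed);
    pthread_mutex_unlock(&lock);
}

/* Processing thread: the mirror image, waiting for a full half. */
void process_half(int h)
{
    pthread_mutex_lock(&lock);
    while (!half_ready[h])
        pthread_cond_wait(&changed, &lock);
    pthread_mutex_unlock(&lock);

    /* ...process data[h * DATASIZE/2 ...] here... */

    pthread_mutex_lock(&lock);
    half_ready[h] = 0;                             /* hand it back to the filler */
    pthread_cond_broadcast(&changed);
    pthread_mutex_unlock(&lock);
}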
Yes, that's an excellent use for pthreads: it's one of the very things that pthreads was made for.
You might think about fork()ing twice: once to create the process to do the data manipulation, and then a second fork() to create the process that fills in the blanks. Use a mutex to let each process protect the array from the other process and it will work fine.
Why would your array need a mutex? How would you set it up? When would each process need to acquire the mutex and when would it need to release the mutex?
-- pete
My program has one background thread that fills and swaps the back buffer of a double buffer implementation.
The main thread uses the front buffer to send out data. The problem is that the main thread gets more processing time on average when I run the program. I want the opposite behavior, since filling the back buffer is a more time-consuming process than processing and sending out data to the client.
How can I achieve this with C POSIX pthreads on Linux?
In my experience, if, in the absence of prioritisation, your main thread is getting more CPU, then this means one of two things:
it actually needs the extra time, contrary to your expectation, or
the background thread is being starved, perhaps due to lock contention
Changing the priorities will not fix either of those.
Have a look at pthread_setschedparam(): http://www.kernel.org/doc/man-pages/online/pages/man3/pthread_setschedparam.3.html
int pthread_setschedparam(pthread_t thread, int policy,
                          const struct sched_param *param);
You can set the priority in the sched_priority field of the sched_param.
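For example, the call might look like this; the priority value is arbitrary, and with SCHED_FIFO the process usually needs root (or CAP_SYS_NICE), so check the return value.

#include <pthread.h>
#include <sched.h>

/* Raise the priority of an already-created thread. The value 10 is
 * arbitrary; the valid range comes from sched_get_priority_min/max(). */
int raise_thread_priority(pthread_t thread)
{
    struct sched_param param = { .sched_priority = 10 };
    /* a real-time policy, so that sched_priority is actually honored */
    return pthread_setschedparam(thread, SCHED_FIFO, &param);   /* 0 on success */
}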
Use pthread_setschedprio(pthread_t thread, int priority). But as in the other cases (pthread_setschedparam, or when using pthread_attr_t), your process must be started as root if you want to raise priorities (as with the nice utility).
You should have a look at the pthread_attr_t struct. It's passed as a parameter to the pthread_create function. It's used to change the thread attributes and can help you to solve your problem.
If you can't solve it that way, you will have to use a mutex to prevent your main thread from accessing the buffer before your other thread swaps it.