Deadlock in producer-consumer using pthreads - c

I'm using pthreads to solve a producer-consumer problem. Basically, the producer reads a file into a buffer and the consumers (at least one, possibly more) take entries from the buffer and operate on them one by one.
This is the producer:
//...Local stuff...
if(file){
    while(fgets(line, 256, file)){
        pthread_mutex_lock(&buffer_mutex);
        while(data->buffer->buffer_items == data->buffer->buffer_size){
            pthread_cond_wait(&buffer_full_cv, &buffer_mutex);
        }
        reads++;
        add_to_head(data->buffer, line);
        pthread_cond_broadcast(&buffer_ready_cv);
        pthread_mutex_unlock(&buffer_mutex);
    }
    pthread_mutex_lock(&buffer_mutex);
    work = 0;
    pthread_mutex_unlock(&buffer_mutex);
    fclose(file);
}
And this is the consumer:
//...Local stuff...
while(1){
    pthread_mutex_lock(&buffer_mutex);
    while(data->buffer->buffer_items == 0){
        if(work)
            pthread_cond_wait(&buffer_ready_cv, &buffer_mutex);
        else if(!work && !data->buffer->buffer_items)
            pthread_exit(NULL);
    }
    remove_from_tail(data->buffer, string_to_check);
    pthread_cond_signal(&buffer_full_cv);
    pthread_mutex_unlock(&buffer_mutex);
    for(unsigned int i = 0; i < data->num_substrings; i++){
        cur_occurrence = strstr(string_to_check, data->substrings[i]);
        while(cur_occurrence != NULL){
            pthread_mutex_lock(&buffer_mutex);
            data->occurrences[i]++;
            cur_occurrence++;
            cur_occurrence = strstr(cur_occurrence, data->substrings[i]);
            pthread_mutex_unlock(&buffer_mutex);
        }
    }
}
What seems to be happening is that the file is completely read while there's still work to be done, but since the producer is not running anymore, the wait in the consumer never finishes.
P.S.: I've also tried pthread_cond_signal instead of broadcast, but that didn't work either.
Anyway, is there something I'm missing here?

What seems to be happening is that the file is completely read while there's still work to be done, but since the producer is not running anymore, the wait in the consumer never finishes.
Technically, this is not a deadlock. This is a common challenge with producer/consumer thread configurations. There are various ways to deal with this.
You could use a special buffer value (separate from the empty buffer) which signals that the producer has finished. (If you have multiple consumers, this special value has to be left in the buffer.) Such in-band signaling, while sometimes convenient to implement, is typically not a good design, though.
If you have multiple producers, you probably should combine the buffer with a counter of the number of producers running, essentially adding a semaphore to the buffer. If the count of producers reaches zero, the consumers need to exit.
The thread which spawns the producers and consumers could use pthread_cancel after joining all producers, so that the consumers' pthread_cond_wait is aborted. This is tricky to get completely right, though, and cancellation is best avoided in general.
Note that if you have multiple consumers, each one needs to broadcast the signal after it has observed that the end-of-data state has been reached, so that the other consumers have a chance to observe it, too.
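For illustration, here is a minimal sketch of the producer-counter approach, reusing the names from the question where possible; the producers_running counter is invented for the example and, like the buffer fields, is assumed to be protected by buffer_mutex. The key detail is that whoever brings the counter to zero must broadcast on the data condition, so that every waiting consumer wakes up and can observe the end-of-data state:

/* Sketch only: producers_running is a hypothetical counter of live producers. */
int producers_running = 1;                      /* protected by buffer_mutex */

/* Producer, after its reading loop: */
pthread_mutex_lock(&buffer_mutex);
producers_running--;                            /* this producer is done */
if (producers_running == 0)
    pthread_cond_broadcast(&buffer_ready_cv);   /* wake ALL waiting consumers */
pthread_mutex_unlock(&buffer_mutex);

/* Consumer wait loop: */
pthread_mutex_lock(&buffer_mutex);
while (data->buffer->buffer_items == 0 && producers_running > 0)
    pthread_cond_wait(&buffer_ready_cv, &buffer_mutex);
if (data->buffer->buffer_items == 0) {          /* empty and no producers left */
    pthread_mutex_unlock(&buffer_mutex);        /* don't exit while holding the mutex */
    pthread_exit(NULL);
}
remove_from_tail(data->buffer, string_to_check);
pthread_cond_signal(&buffer_full_cv);
pthread_mutex_unlock(&buffer_mutex);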

Related

How to rewrite GCD code as POSIX in C

This question is a follow-up to a question which happened to be more complex than I had initially thought it would be. In the program I'm writing, the main thread takes care of GUI-driven data updates, a producer thread (with a number of sub-threads, because the producer task is "embarrassingly parallel") writes to a circular buffer, while the real-time consumer thread reads from it. The original development platform was OSX/Darwin, but I'd like to make the code more portable, UNIX source compatible. Everything can easily be written in POSIX, except for the following OSX-specific GCD call, for which I can't come up with a POSIX equivalent, if any exists. It launches the producer thread, from which its subthreads are launched programmatically, depending on the number of available logical CPU cores:
void dproducer (bool on, int cpuNum, uData* data)
{
    if (on == true)
    {
        data->state = starting;
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            producerSum(on, cpuNum, data);
        });
    }
    return;
}
This is the block diagram of the program:
For clarity I'm also adding the producerSum code. It's an infinite loop whose execution can either wait for the consumer thread to do the work, or get interrupted by changing data->state, which has global scope:
void producerSum(bool on, int cpuNum, uData* data)
{
    int rc;
    pthread_t threads[cpuNum];   //subthreads
    tData thread_args[cpuNum];
    void* resulT;
    static float frames[4096];
    while(on){
        memset(frames, 0, 4096*sizeof(float));
        if( (fbuffW = (float**)calloc(cpuNum + 1, sizeof(float*))) != NULL)
            for (int i = 0; i < cpuNum; ++i){
                fbuffW[i] = (float*)calloc(data->frames, sizeof(float));
                thread_args[i].tid = i;             //ord. number of thread
                thread_args[i].cpuCount = cpuNum;   //counter increment step
                thread_args[i].data = data;
                rc = pthread_create(&threads[i], NULL, producerTN, (void *) &thread_args[i]);
                if(rc != 0) printf("rc = %s\n", strerror(rc));
            }
        for (int i = 0; i < cpuNum; ++i) rc = pthread_join(threads[i], &resulT);
        //each subthread writes to fbuffW[i] and results get summed
        for(UInt32 samp = 0; samp < data->frames; samp++)
            for(int i = 0; i < cpuNum; i++)
                frames[samp] += fbuffW[i][samp];
        switch (data->state) { ... }   //graceful interruption, resuming and termination mechanism
        { … }                          //simple code for copying frames[] to the circular buffer
        pthread_cond_wait(&cond, &mutex);   //wait for the real-time consumer
        for(int i = 0; i < cpuNum; i++) free(fbuffW[i]);
        free(fbuffW);
    } //end while(on)
    return;
}
The syncing inside the producer thread is successfully handled by pthread_create() and pthread_join(), while the necessary coordination between the producer and consumer threads is successfully handled by a pthread_mutex_t variable and a pthread_cond_t variable (with the corresponding locking, unlocking, broadcasting and waiting calls). uData is a program-defined struct (or class instance). Any direction on where to look would help indeed.
Thanks for reading this post!
A dispatch queue is just what it sounds like: a queue, as in the standard FIFO list data structure. It holds tasks. Those tasks can be represented by Objective-C Blocks as in your code or by function pointers and context pointer values. You'll presumably need to avoid Blocks if you're aiming for cross-platform compatibility. In fact, since you only ever dispatch one task, your tasks can just encapsulate the parameters (on, cpuNum, and data) and not the code (the call to producerSum()).
The queues are serviced by threads from a thread pool. GCD manages the threads and pool. At least on OS X, there's integration with the OS such that the pool's size is governed by overall system load, which you won't be able to reproduce in a cross-platform manner.
Operations on a dispatch queue are thread-safe. This includes adding tasks to them and the worker threads removing tasks from them.
You're going to have to implement all of this. It's definitely possible, but it will be a bother. In many ways, the queues and the thread pool constitute a producer-consumer architecture. Basically, your GCD-based solution was a bit of a cheat because you just used a producer-consumer API to implement your producer-consumer design. Now, you're going to have to really implement a producer-consumer design without the crutch.
There's basically no more to it than the thread-creation and POSIX condition variables you're already using.
dispatch_async() is basically just locking the mutex for the queue of tasks, adding the task to the queue, signalling the condition variable, and unlocking the mutex. Each worker thread will just wait on the condition variable and, when it wakes, lock the mutex, pop a task off the queue if there's one, unlock the mutex, and run the task if it got one. You probably also need a mechanism to signal the worker thread that it's time to gracefully terminate.
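As a rough illustration of that last paragraph, here is a minimal sketch of such a queue; the names task_t, queue_push(), worker() and the shutting_down flag are invented for the example, and a real replacement would also need error checking and a thread pool sized to the machine:

#include <pthread.h>
#include <stdlib.h>

typedef struct task {
    void (*fn)(void *);              /* the work to run */
    void *arg;                       /* its context pointer */
    struct task *next;
} task_t;

static task_t *head = NULL, *tail = NULL;
static int shutting_down = 0;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;

/* Rough stand-in for dispatch_async(): enqueue a task and wake one worker. */
void queue_push(void (*fn)(void *), void *arg)
{
    task_t *t = malloc(sizeof *t);
    t->fn = fn; t->arg = arg; t->next = NULL;
    pthread_mutex_lock(&qlock);
    if (tail) tail->next = t; else head = t;
    tail = t;
    pthread_cond_signal(&qcond);
    pthread_mutex_unlock(&qlock);
}

/* Worker thread: pop tasks and run them until told to shut down. */
void *worker(void *unused)
{
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (head == NULL && !shutting_down)
            pthread_cond_wait(&qcond, &qlock);
        if (head == NULL) {                   /* shutting down and queue drained */
            pthread_mutex_unlock(&qlock);
            return NULL;
        }
        task_t *t = head;
        head = t->next;
        if (head == NULL) tail = NULL;
        pthread_mutex_unlock(&qlock);
        t->fn(t->arg);                        /* run the task outside the lock */
        free(t);
    }
}

The dispatch_async() call in dproducer() would then become a queue_push() of a small function plus a struct carrying on, cpuNum and data, since Blocks are not portable.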

Rerunning cancelled pthread

My problem is that I cannot reuse a cancelled pthread. Sample code:
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_t alg;
pthread_t stop_alg;
int thread_available;

void *stopAlgorithm() {
    while (1) {
        sleep(6);
        if (thread_available == 1) {
            pthread_cancel(alg);
            printf("Now it's dead!\n");
            thread_available = 0;
        }
    }
}

void *algorithm() {
    while (1) {
        printf("I'm here\n");
    }
}

int main() {
    thread_available = 0;
    pthread_create(&stop_alg, NULL, stopAlgorithm, 0);
    while (1) {
        sleep(1);
        if (thread_available == 0) {
            sleep(2);
            printf("Starting algorithm\n");
            pthread_create(&alg, NULL, algorithm, 0);
            thread_available = 1;
        }
    }
}
This sample should create two threads: one is created at program start and tries to cancel the second as soon as it starts; the second should be re-run as soon as it is cancelled and say "I'm here". But once the algorithm thread has been cancelled, it doesn't start again: the program says "Starting algorithm" and then does nothing, with no more "I'm here" messages. Could you please tell me how to start a cancelled (immediately stopped) thread once again?
UPD: So, thanks to your help, I understood what the problem is. When I re-run the algorithm thread it throws error 11: "The system lacked the necessary resources to create another thread, or the system-imposed limit on the total number of threads in a process PTHREAD_THREADS_MAX would be exceeded." Actually I have 5 threads, but only one is cancelled; the others stop via pthread_exit. So after the algorithm stopped and the program went into standby mode, I checked the status of all threads with pthread_join: all threads show 0 (the cancelled one shows PTHREAD_CANCELED), which as far as I can understand means that all threads stopped successfully. But one more try to run the algorithm throws error 11 again. So I've checked memory usage: in standby mode before the algorithm, 10428; during the algorithm, when all threads are used, 2026m; in standby mode after the algorithm stopped, 2019m. So even if the threads have stopped they still use memory, and pthread_detach didn't help with this. Are there any other ways to clean up after threads?
Also, sometimes on pthread_cancel my program crashes with "libgcc_s.so.1 must be installed for pthread_cancel to work".
Several points:
First, this is not safe:
int thread_available;

void *stopAlgorithm() {
    while (1) {
        sleep(6);
        if (thread_available == 1) {
            pthread_cancel(alg);
            printf("Now it's dead!\n");
            thread_available = 0;
        }
    }
}
It's not safe for at least two reasons. Firstly, you've not marked thread_available as volatile. This means that the compiler can optimise stopAlgorithm to read the variable once, and never reread it. Secondly, you haven't ensured access to it is atomic, or protected it by a mutex. Either declare it:
volatile sig_atomic_t thread_available;
(or similar), or better, protect it by a mutex.
But for the general case of triggering one thread from another, you are better off using a condition variable (and a mutex), using pthread_cond_wait or pthread_cond_timedwait in the listening thread, and pthread_cond_broadcast in the triggering thread.
Next, what's the point of the stopAlgorithm thread? All it does is cancel the algorithm thread after an unpredictable amount of time between 0 and 6 seconds. Why not just send the pthread_cancel from the main thread?
Next, do you care where your algorithm is when it is cancelled? If not, just pthread_cancel it. If so (and anyway, I think it's far nicer), regularly check a flag (either atomic and volatile as above, or protected by a mutex) and pthread_exit if it's set. If your algorithm does big chunks every second or so, then check it then. If it does lots of tiny things, check it (say) every 1,000 operations so taking the mutex doesn't introduce a performance penalty.
Lastly, if you cancel a thread (or if it pthread_exits), the way you start it again is simply to call pthread_create again. It's then a new thread running the same code.
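To make the last two points concrete, here is a minimal sketch of the flag-based approach; stop_requested, its mutex and should_stop() are invented names for the example:

#include <pthread.h>

static pthread_mutex_t stop_lock = PTHREAD_MUTEX_INITIALIZER;
static int stop_requested = 0;            /* hypothetical flag, protected by stop_lock */

static int should_stop(void)
{
    pthread_mutex_lock(&stop_lock);
    int s = stop_requested;
    pthread_mutex_unlock(&stop_lock);
    return s;
}

static void *algorithm(void *unused)
{
    while (!should_stop()) {
        /* ... one chunk of the algorithm's work ... */
    }
    return NULL;                          /* cooperative exit instead of pthread_cancel */
}

To stop it, set stop_requested under the lock and pthread_join the thread; to "restart" it, clear the flag and call pthread_create again with the same function.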

While(1) loop reduce cpu without sleep

My question is that I am using a raw socket to pass high-rate (larger than 50 kpps) traffic, with two threads: one to send (read from the buffer), the other to receive (write to the buffer).
I have to use a while(1) loop to get an infinite loop, and I cannot use usleep since then I will lose packets (I have tried that)... now the CPU usage is 100% and I think I am burning my CPU...
here is the code:
while (1)
{
    if (sendIndex == PACKET_COUNT_MAX)
    {
        sendIndex = 0;
    }
    else if (ringBuffer[sendIndex].drop == 0)
    {
        if (sendtosocket(ringBuffer, sendIndex, rawout) < 0)
            a++;
        else
            sendIndex++;
    }
    else if (ringBuffer[sendIndex].drop == 1)
    {
        ringBuffer[sendIndex].header.free = 1;
        memset(ringBuffer[sendIndex].data, 0, sizeof(ringBuffer[sendIndex].data));
        sendIndex++;
    }
    else
    {
        a++;
    }
    //nanosleep((struct timespec[]){{0, 5}}, NULL);
}
Thanks in advance!!!!!!!
Lisa
You need to pass control over to the kernel. The call you may find useful is select. Check out the whole story at http://manpages.courier-mta.org/htmlman2/select.2.html. For more info, see http://www.gnu.org/software/libc/manual/html_node/Waiting-for-I_002fO.html.
It's all about knowing you have nothing else to do except wait for input from the network. Or the file system. Or anything else that is a file descriptor (U*ix lingo). So, you let the kernel wake you once you've got something to process.
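A minimal sketch of what that looks like for the receiving side, assuming sock is the raw socket's file descriptor (declarations come from <sys/select.h>):

fd_set readfds;
FD_ZERO(&readfds);
FD_SET(sock, &readfds);

/* Block until the socket becomes readable; no CPU is burned while waiting. */
if (select(sock + 1, &readfds, NULL, NULL, NULL) > 0 && FD_ISSET(sock, &readfds)) {
    /* recvfrom() will not block now; read the packet into the ring buffer */
}

If you put this in a loop, rebuild the fd_set before each select() call, since select() modifies it.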
You can try:
Increasing the receive buffer
Slowing down the sendto calls with a sleep; that should not lose packets
Using the MSG_WAITALL flag with recvfrom to make it a blocking read, and making sure the socket was not opened with SOCK_NONBLOCK or O_NONBLOCK
You need sane synchronization between your threads. This includes:
Using locks of some kind to ensure that a variable isn't read by one thread while it is, or might be, modified in another.
Using some kind of waiting scheme so that the sending thread can wait for there to be work for it to do without spinning.
Check out pthread_mutex_lock and pthread_cond_wait (assuming you're using POSIX threads).
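For example, here is a sketch of how the sending thread could sleep until the receiving thread hands it work; buf_lock, buf_cond and pending are invented names for the illustration:

/* Receiving thread, after writing a packet into the ring buffer: */
pthread_mutex_lock(&buf_lock);
pending++;
pthread_cond_signal(&buf_cond);
pthread_mutex_unlock(&buf_lock);

/* Sending thread: waits instead of spinning while there is nothing to send. */
pthread_mutex_lock(&buf_lock);
while (pending == 0)
    pthread_cond_wait(&buf_cond, &buf_lock);
pending--;
pthread_mutex_unlock(&buf_lock);
/* ... sendtosocket(...) for the claimed slot ... */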

Event object manual-reset, wrong thread synchronization

I'm approaching C Windows programming, in particular threads, concurrency and synchronization.
To experiment, I'm writing a C program that accepts N parameters.
Each parameter indicates a path to a file system directory tree and the program has to compare the content of all directories to decide whether all directories have the same content or not.
The main thread runs a "reading" thread for each parameter, while a single "comparison" thread compares the names of all the entries found. For each file/directory found, the "reading" threads synchronize by activating the "comparison" thread.
I wrote the program with Semaphore objects and now I'm trying with Event objects.
The idea is to use N Events auto-reset and a single Event manual-reset.
The N events are used by the N "reading" threads to signal the "comparison" thread which is in WaitForMultipleObjects for an INFINITE time. When all the signals are available, it starts comparing the entry and then it performs a SetEvent() for the manual-reset object.
The "reading" threads wait for this set and then Reset the event and continue working with the next entry.
Some code for the N reading threads:
void ReadingTraverseDirectory(LPTSTR StartPathName, DWORD i) {
    //variables and some work
    do {
        //take the next entry and put it in current_entry;
        gtParams[it].entry = current_entry;                 //global var for comparison
        SetEvent(glphReadingEvent[i]);                      //signal the comparison thread
        WaitForSingleObject(ghComparisonEvent, INFINITE);   //wait signal to restart working
        ResetEvent(ghComparisonEvent);                      //reset the event
        if (current_entry == TYPE_DIR) {
            ReadingTraverseDirectory(current_entry, i);     //recur to explore the next dir
        }
    } while (FindNextFile(SearchHandle, &FindData));        //while there are still entries
    //
    return;
}
Some code for the comparison thread:
DWORD WINAPI CompareThread(LPVOID arg) {
    while (entries are equal){
        WaitForMultipleObjects(N, glphReadingEvent, TRUE, 1000);
        for (r = 0; r < nworkers - 1; r++){
            if (_tcscmp(entries) != 0){
                //entries are different; exit and close.
            }
        }
        SetEvent(ghComparisonEvent);
    }
}
The problem:
Sometimes one reading thread manages to work without respecting the synchronization with the other threads. If I put a printf() or Sleep(1) between the Wait and the Set of the comparison thread, the program works perfectly.
My opinion:
I think the manual-reset Event is not safe for this kind of (barrier) synchronization.
A reading thread may be too fast in calling ResetEvent(), and if the scheduler slows down the other threads, some of them risk staying blocked while the one that performed the Reset is able to continue its work. However, if this were the case, the comparison thread should block itself on WaitForMultipleObjects, causing a deadlock... actually there is no deadlock, but one thread is able to cycle more times than the others.
What I'm trying to understand is why a simple Sleep(1) can solve the issue. Is it a matter of scheduling or a wrong implementation of the synchronization?
Thank you.

how to run thread in main function infinitely without causing program to terminate

I have a function, say void *WorkerThread(void *ptr).
The function WorkerThread(void *ptr) has an infinite loop which reads and writes continuously from the serial port.
example
void *WorkerThread(void *ptr)
{
    while(1)
    {
        // READS AND WRITES from Serial Port USING MUTEX_LOCK AND MUTEX_UNLOCK
    } //while ends
}
The other function I wrote is ThreadTest
example
int ThreadTest()
{
    pthread_t Worker;
    int iret1;
    pthread_mutex_init(&stop_mutex, NULL);
    if( (iret1 = pthread_create(&Worker, NULL, WorkerThread, NULL)) == 0)
    {
        pthread_mutex_lock(&stop_mutex);
        stopThread = true;
        pthread_mutex_unlock(&stop_mutex);
    }
    if (stopThread != false)
        stopThread = false;
    pthread_mutex_destroy(&stop_mutex);
    return 0;
}
In the main function I have something like:
int main(int argc, char **argv)
{
    fd = OpenSerialPort();
    if( ConfigurePort(fd) < 0) return 0;
    while (true)
    {
        ThreadTest();
    }
    return 0;
}
Now, when I run this sort of code with debug statements, it runs fine for a few hours and then throws a message like "can't able to create thread" and the application terminates.
Does anyone have an idea where I am making a mistake?
Also, is there a way to run ThreadTest in main without using while(true), as I am already using while(1) in WorkerThread to read and write indefinitely?
All comments and criticism are welcome.
Thanks & regards,
SamPrat.
You are creating threads continually and might be hitting the limit on number of threads.
The pthread_create man page says:
EAGAIN Insufficient resources to create another thread, or a system-imposed limit on the number of threads was encountered. The latter case may occur in two ways: the RLIMIT_NPROC soft resource limit (set via setrlimit(2)), which limits the number of processes for a real user ID, was reached; or the kernel's system-wide limit on the number of threads, /proc/sys/kernel/threads-max, was reached.
You should rethink the design of your application. Creating an infinite number of threads is not a good design.
[UPDATE]
You are using a lock to set an integer variable:
pthread_mutex_lock(&stop_mutex);
stopThread = true;
pthread_mutex_unlock(&stop_mutex);
However, this is not required, as setting an int is atomic (on probably all architectures?). You should use a lock when you are doing non-atomic operations, e.g. test-and-set:
take_lock ();
if (a != 1)
    a = 1;
release_lock ();
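In pthread terms that pseudocode would look roughly like this (a and a_mutex are illustrative names):

pthread_mutex_lock(&a_mutex);
if (a != 1)
    a = 1;
pthread_mutex_unlock(&a_mutex);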
You create a new thread each time ThreadTest is called, and never destroy these threads. So eventually you (or the OS) run out of thread handles (a limited resource).
Threads consume resources (memory & processing), and you're creating a thread each time your main loop calls ThreadTest(). And resources are finite, while your loop is not, so this will eventually throw a memory allocation error.
You should get rid of the main loop, and make ThreadTest return the newly created thread (pthread_t). Finally, make main wait for the thread termination using pthread_join.
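A minimal sketch of that restructuring, keeping the poster's OpenSerialPort(), ConfigurePort() and WorkerThread() and omitting error handling:

pthread_t ThreadTest(void)
{
    pthread_t worker;
    pthread_create(&worker, NULL, WorkerThread, NULL);   /* start the single worker */
    return worker;
}

int main(int argc, char **argv)
{
    fd = OpenSerialPort();
    if (ConfigurePort(fd) < 0) return 0;

    pthread_t worker = ThreadTest();   /* create the thread exactly once */
    pthread_join(worker, NULL);        /* block here instead of looping forever */
    return 0;
}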
Your pthreads are zombies and consume system resources. For Linux you can use ulimit -s to check your active upper limits -- but they are not infinite either. Use pthread_join() to let a thread finish and release the resources it consumed.
Do you know that select() is able to read from multiple (device) handles? You can also define a user-defined source to stop select(), or a timeout. With this in mind you are able to start one thread and let it sleep if nothing occurs. If you intend to stop it, you can send an event (or let a timeout expire) to break the select() call.
An additional design concept you have to consider is message queues to share information between your main application and your pthreads. select() is compatible with this technique, so you can use one concept for all data sources (devices and message queues).
Here is a reference to good pthread reading material, and the best pthread book available: Programming with POSIX(R) Threads, ISBN-13: 978-0201633924.
Looks like you've not called pthread_join(), which cleans up state after non-detached threads have finished. I'd speculate that you've hit some per-process resource limit here as a result.
As others have noted, this is not great design though: why not re-use the thread rather than creating a new one on every loop?
