I'm building a project in C (using OpenWrt as the OS) to upload files to an FTP server, and I'm using MQTT for the incoming data. For each topic I subscribe to, I save the incoming data and then upload it to the FTP server, and to keep things going smoothly, each time I need to upload a file I just use a thread to do the job.
To make sure the program doesn't run too many threads, each topic is allowed to create only one thread. I'm using a flag variable (like a mutex, but not a pthread_mutex_t, because I don't want to block a thread; I want to skip that upload and move on to the next file). I thought this technique kept me safe, but after running the program for about 15 minutes I get error 11 ("resource temporarily unavailable") when the program tries to create a thread with pthread_create.
One of my attempts to figure out what the problem could be:
I used pthread_join(), which is not an option in my situation, but just to make sure that each thread finishes and isn't stuck in a permanent loop. The program ran for over an hour and the error didn't show up again, and of course each thread finished as intended.
I'm 90% sure that each topic creates only one thread, and that the next one is created only after the previous one has finished (I was tracking the status of the flag before and after thread creation).
I set the maximum thread count in /proc/sys/kernel/threads-max to 2000 (more than enough, since I don't have that many topics).
upload function (this will create the thread):
void uploadFile(<args...>, bool *locker_p){
    *locker_p = true;
    args->uploadLocker_p = locker_p;
    <do something here>
    pthread_t tid;
    int error = pthread_create(&tid, NULL, uploadFileThread, (void *)args);
    if(0 != error){
        printf("Couldn't run thread, (%d) => %s\n", error, strerror(error));
    }
    else{
        printf("Thread %lu\n", (unsigned long)tid);
    }
}
upload thread:
void *uploadFileThread(void *arg){
typeArgs* args = (typeArgs*)arg;
<do something like upload the file>
*(args->uploadLocker_p) = false;
free(args);
return NULL;
//pthread_exit(0);
}
The default stack size for the created threads is eating too much virtual memory.
Essentially, the kernel is telling your process that it has so much virtual memory already in use that it doesn't dare give it any more, because there isn't enough RAM and swap to back it up if the process were to suddenly use it all.
To fix, create an attribute that limits the per-thread stack to something sensible. If your threads do not use arrays as local variables, or do deep recursion, then 2*PTHREAD_STACK_MIN (from <limits.h>) is a good size.
The attribute is not consumed by the pthread_create() call, it is just a configuration block, and you can use the same one for any number of threads you create, or create a new one for each thread.
Example:
pthread_attr_t attrs;
pthread_t tid;
int err;
pthread_attr_init(&attrs);
pthread_attr_setstacksize(&attrs, 2 * PTHREAD_STACK_MIN);
err = pthread_create(&tid, &attrs, uploadFileThread, (void *)args);
pthread_attr_destroy(&attrs);
if (err) {
/* Failed; pthread_create() returns the error code in err (it does not set errno); use strerror(err) */
} else {
/* Succeeded */
}
Also remember that if your uploadFileThread() allocates memory, it will not be freed automatically when the thread exits. It looks like OP already knows this (as they have the function free the argument structure when it's ready to exit), but I thought it a good idea to point it out.
Personally, I like to use a thread pool instead. The idea is that the upload workers are created beforehand, and they'll wait for a new job. Here is an example:
pthread_mutex_t workers_lock;
pthread_cond_t workers_wait;
volatile struct work *workers_work;
volatile int workers_idle;
volatile sig_atomic_t workers_exit = 0;
where struct work is a singly-linked list protected by workers_lock, workers_idle is initialized to zero and incremented when waiting for new work, workers_wait is a condition variable signaled when new work arrives under workers_lock, and workers_exit is a counter that, when nonzero, tells that many workers to exit.
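For illustration, a minimal sketch of what struct work might look like (the payload field is a placeholder; whatever your jobs actually need goes there):

struct work {
    struct work *next;   /* next job in the singly-linked chain, or NULL */
    char *path;          /* hypothetical payload: e.g. the file to upload */
};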
A worker would be basically something along the lines of
void worker_do(struct work *job)
{
/* Whatever handling a struct job needs ... */
}
void *worker_function(void *payload __attribute__((unused)))
{
/* Grab the lock. */
pthread_mutex_lock(&workers_lock);
/* Job loop. */
while (!workers_exit) {
if (workers_work) {
/* Detach first work in chain. */
struct work *job = workers_work;
workers_work = job->next;
job->next = NULL;
/* Work is done without holding the mutex. */
pthread_mutex_unlock(&workers_lock);
worker_do(job);
pthread_mutex_lock(&workers_lock);
continue;
}
/* We're idle, holding the lock. Wait for new work. */
++workers_idle;
pthread_cond_wait(&workers_wait, &workers_lock);
--workers_idle;
}
/* This worker exits. */
--workers_exit;
pthread_mutex_unlock(&workers_lock);
return NULL;
}
The connection handling process can use idle_workers() to check the number of idle workers, and either grow the worker thread pool or reject the connection as being too busy. idle_workers() is something like
static inline int idle_workers(void)
{
int result;
pthread_mutex_lock(&workers_lock);
result = workers_idle;
pthread_mutex_unlock(&workers_lock);
return result;
}
Note that each worker only holds the lock for very short durations, so the idle_workers() call won't block for long. (pthread_cond_wait() atomically releases the lock when it starts waiting for a signal, and only returns after it re-acquires the lock.)
When waiting for a new connection in accept(), set the socket nonblocking and use poll() to wait for new connections. If the timeout passes, examine the number of workers, and reduce them if necessary by calling reduce_workers(1) or similar:
void reduce_workers(int number)
{
pthread_mutex_lock(&workers_lock);
if (workers_exit < number) {
workers_exit = number;
pthread_cond_broadcast(&workers_wait);
}
pthread_mutex_unlock(&workers_lock);
}
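The waiting loop described above might look something like this (a sketch requiring <poll.h>; listenfd, the 5-second timeout, and the idle threshold of 4 are assumptions for illustration, not part of the original design):

struct pollfd pfd = { .fd = listenfd, .events = POLLIN };
int ready = poll(&pfd, 1, 5000);          /* wait up to 5 seconds */
if (ready > 0) {
    int connfd = accept(listenfd, NULL, NULL);
    if (connfd != -1) {
        /* package the connection as a struct work, or reject if too busy */
    }
} else if (ready == 0) {
    /* timeout: no new connections; shrink the pool if it looks oversized */
    if (idle_workers() > 4)
        reduce_workers(1);
}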
To avoid having to call pthread_join() for each thread – and we really don't even know which threads have exited here! – to reap/free the kernel and C library metadata related to the thread, the worker threads need to be detached. After creating a worker thread tid successfully, just call pthread_detach(tid);.
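For example, creating one pool worker might look like this (a sketch, reusing the attrs block from the earlier example):

pthread_t tid;
int err = pthread_create(&tid, &attrs, worker_function, NULL);
if (err == 0)
    pthread_detach(tid);   /* thread resources are reaped automatically on exit */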
When a new connection arrives and it is determined to be one that should be delegated to the worker threads, you can, but do not have to, check the number of idle threads, create new worker threads, reject the upload, or just append the work to the queue, so that it will "eventually" be handled.
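Appending work to the queue could be a small helper along these lines (a sketch; it pushes to the head of the chain, so jobs are handled in LIFO order, and wakes one idle worker if any is waiting):

void submit_work(struct work *job)
{
    pthread_mutex_lock(&workers_lock);
    job->next = (struct work *)workers_work;
    workers_work = job;
    pthread_cond_signal(&workers_wait);   /* wake one idle worker, if any */
    pthread_mutex_unlock(&workers_lock);
}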
Related
I'm in a situation where every given time interval (let's say 1 second) I need to generate a thread that follows a pre-determined set of actions. Then after it is done, somehow I need to clean up the resources associated with that thread. But I'm not sure how I can do this while still generating new threads, since pthread_join is blocking, so I can't keep generating new threads while waiting for others to finish.
The typical method I have seen suggested to do something like this is:
int i;
pthread_t threads[NUM_THREADS];
for (i = 0; i < NUM_THREADS; ++i) {
pthread_create(&threads[i], NULL, mythread, NULL);
}
for (i = 0; i < NUM_THREADS; ++i) {
pthread_join(threads[i], NULL);
}
However, I don't want to generate a pre-determined amount of threads at the start and let them run. I want to generate the threads one at a time, and just keep generating (this will be ok since the threads reach a saturation point where they just end at the first step if there's more than 100 of them). One solution I thought of is to have the pthread_joins running in their own thread, but then I'm not sure how to tell it which ones to join. These threads have randomised sleep times within them so there's no specified order in which they ought to finish. What I have in mind as to how the program should run is something like this:
Thread[1] created/running
Thread[2] created/running
Thread[3] created/running
Thread[2] finished -> join/free memory
new Thread[2] created/running (since 2 finished, now create a new thread 2)
So for example, you can never have more than 5 threads running, but every time one does finish you create a new one. The threads don't necessarily need to be in an array, I just thought that would make it easier to manage. I've been thinking about this for hours now and can't think of a solution. Am I just approaching the problem the completely wrong way, and there's something easier?
Main thread:
Initialize counting semaphore to 5
Semaphore down
Launch thread
Loop back
Worker thread:
Work
Cleanup
Semaphore up
Die
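A minimal sketch of this outline with a POSIX counting semaphore (the limit of 5 and the worker body are placeholders; detaching the thread is one way to handle the cleanup step without joining):

#include <pthread.h>
#include <semaphore.h>

#define MAX_CONCURRENT 5

sem_t slots;

void *worker(void *arg)
{
    /* ... work ... */
    /* ... cleanup ... */
    sem_post(&slots);                      /* semaphore up: free a slot */
    return NULL;                           /* die */
}

int main(void)
{
    sem_init(&slots, 0, MAX_CONCURRENT);   /* counting semaphore initialized to 5 */
    for (;;) {
        sem_wait(&slots);                  /* semaphore down: wait for a free slot */
        pthread_t tid;
        if (pthread_create(&tid, NULL, worker, NULL) == 0)
            pthread_detach(tid);           /* let the system reclaim it on exit */
        else
            sem_post(&slots);              /* creation failed: give the slot back */
    }
}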
pthread_join is a function for joining threads, not for waiting on them. Yes, the caller will wait until the thread returns from its start function, but that is only a side effect.
Waiting for threads is accomplished via condition variables (pthread_cond_wait) or barriers (pthread_barrier_wait).
Another approach would be to introduce a global list where the finished threads are stored (before exiting the thread, add it to that list), and every interval (I assume from your main function), work off the list and join all those threads. I don't think I have to mention that the list has to be protected by a mutex.
e.g.
struct thread_data {
struct thread_data *next; //single linked list instead of array
pthread_t id; //thread id
void *priv_data; //some data needed for other functionality
bool in_use; //test flag to indicate
//if the specific data/thread is in use/running
};
//mutex to protect the global garbage data
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
//global list for garbage collection, protected via 'lock'
struct thread_data *garbage = NULL;
//thread starting function
void* thread_func(void *arg)
{
//1. cast 'arg' to 'struct thread_data'
struct thread_data *data = (struct thread_data*) arg;
//2. do your other stuff
//3. before returning/exiting
pthread_mutex_lock(&lock);
//add data to garbage
data->next = garbage;
garbage = data;
pthread_mutex_unlock(&lock);
return NULL;
}
int main()
{
#define MAX_THREADS 5 //some arbitrary number
struct thread_data data[MAX_THREADS] = {0}; //main data storage (zeroed, so in_use starts false)
while (true) {
//1. your sleep/wait routine
//2. on wake up, collect garbage first
pthread_mutex_lock(&lock);
for (; garbage ; garbage = garbage->next) {
//destroy garbage->priv_data, if not already done somewhere else
pthread_join(garbage->id, NULL); //clear thread resources
garbage->in_use = false; //data no longer in use, free to reuse
}
pthread_mutex_unlock(&lock);
//3. iterate over data storage and if a reusable data is found,
//create the thread and pass the specific data
//as argument to the thread starting function
for (int i=0; i < MAX_THREADS; ++i) {
if (!data[i].in_use) {
data[i].in_use = true;
//data[i].priv_data = ...
pthread_create(&data[i].id, NULL, thread_func, &data[i]);
break; //we only want one thread at a time, right?
}
}
}
return 0;
}
You could call pthread_tryjoin_np for existing threads when you're about to create a new one. (This is a GNU extension to POSIX.)
You could create a list of completed threads, and join them and empty the list when you're about to create a new one. An easily-implemented stack could be used as the list.
You could detach the threads so they get cleaned up automatically. You will need a mechanism to wait for threads to complete before exiting the main program, but that can easily be accomplished with a cond var.
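A sketch of that last approach (the counter and the names are illustrative):

pthread_mutex_t live_mx = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t live_cv = PTHREAD_COND_INITIALIZER;
int live_threads = 0;   /* incremented (under live_mx) before each create */

void *detached_thread(void *arg)
{
    /* ... do the work ... */
    pthread_mutex_lock(&live_mx);
    if (--live_threads == 0)
        pthread_cond_signal(&live_cv);   /* last one out wakes main */
    pthread_mutex_unlock(&live_mx);
    return NULL;
}

/* before exiting the main program: */
pthread_mutex_lock(&live_mx);
while (live_threads > 0)
    pthread_cond_wait(&live_cv, &live_mx);
pthread_mutex_unlock(&live_mx);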
My question is about exactly when the I/O thread in an asynchronous I/O call returns when a call back function is involved. Specifically, given this very general code for reading a file ...
#include <stdio.h>
#include <aio.h>
...
// callback function:
void finish_aio(sigval_t sigval) {
/* do stuff ... maybe close the file */
}
int main() {
struct aiocb my_aiocb;
int aio_return;
...
//Open file, take care of any other prelims, then
//Fill in file-specific info for my_aiocb, then
//Fill in callback information for my_aiocb:
my_aiocb.aio_sigevent.sigev_notify = SIGEV_THREAD;
my_aiocb.aio_sigevent.sigev_notify_function = finish_aio;
my_aiocb.aio_sigevent.sigev_notify_attributes = NULL;
my_aiocb.aio_sigevent.sigev_value.sival_ptr = &info_on_file;
// then read the file:
aio_return = aio_read(&my_aiocb);
// do stuff that doesn't need data that is being read ...
// then block execution until read is complete:
while(aio_error(&my_aiocb) == EINPROGRESS) {}
// etc.
}
I understand that the callback function is called as soon as the read of the file is completed. But what exactly happens then? Does the I/O thread start running the callback finish_aio()? Or does it spawn a new thread to handle that callback, while it returns to the main thread? Another way to put this would be: When does aio_error(&my_aiocb) stop returning EINPROGRESS? Is it just before the call to the callback, or when the callback is completed?
I understand that the callback function is called as soon as the read of the file is completed. But what exactly happens then?
What happens is that when the IO finishes it "behaves as if" it started a new thread (similar to calling pthread_create(&ignored, NULL, finish_aio, &info_on_file)).
When does aio_error(&my_aiocb) stop returning EINPROGRESS?
I'd expect that aio_error(&my_aiocb) stops returning EINPROGRESS as soon as the IO finishes, then the system (probably the standard library) either begins creating a new thread to call finish_aio() or "unblocks" a "previously created without you knowing" thread. However, I don't think the exact order is documented anywhere ("implementation defined") because it doesn't make much sense to call aio_error(&my_aiocb) from anywhere other than the finish_aio() anyway.
More specifically: if you're using polling (my_aiocb.aio_sigevent.sigev_notify = SIGEV_NONE) then you'd repeatedly check aio_error(&my_aiocb) yourself, and you can't care whether you're notified before or after this happens because you're not notified at all; and if you aren't using polling, you'd wait until you are notified (via a new thread or a signal) that there's a reason to check aio_error(&my_aiocb).
In other words, your finish_aio() would look more like this:
void finish_aio(sigval_t sigval) {
    struct aiocb *my_aiocb = (struct aiocb *)sigval.sival_ptr;
    int status;
    status = aio_error(my_aiocb);
    /* Figure out what to do (handle the error or handle the file's data) */
}
And for main(), the while(aio_error(&my_aiocb) == EINPROGRESS) loop (which may waste a huge amount of CPU time for nothing) would be deleted and/or possibly replaced with something else (e.g. a pthread_cond_wait() to wait until the code in finish_aio() does a pthread_cond_signal() to tell the main thread it can continue).
To understand this, let's take a look at what pure polling would look like:
int main() {
struct aiocb my_aiocb;
int aio_return;
...
//Open file, take care of any other prelims, then
//Fill in file-specific info for my_aiocb, then
my_aiocb.aio_sigevent.sigev_notify = SIGEV_NONE; /* CHANGED! */
// my_aiocb.aio_sigevent.sigev_notify_function = finish_aio;
// my_aiocb.aio_sigevent.sigev_notify_attributes = NULL;
// my_aiocb.aio_sigevent.sigev_value.sival_ptr = &info_on_file;
// then read the file:
aio_return = aio_read(&my_aiocb);
// do stuff that doesn't need data that is being read ...
// then block execution until read is complete:
while(aio_error(&my_aiocb) == EINPROGRESS) {}
finish_aio((sigval_t){ .sival_ptr = &info_on_file }); /* ADDED! (direct call instead of notification) */
}
In this case it behaves almost the same as your original code, except that there's no extra thread (and you can't care if the "thread that doesn't exist" is started before or after aio_error(&my_aiocb) returns a value other than EINPROGRESS).
The problem with pure polling is that the while(aio_error(&my_aiocb) == EINPROGRESS) could waste a huge amount of CPU time constantly checking when nothing has happened yet.
The main purpose of using my_aiocb.aio_sigevent.sigev_notify = SIGEV_THREAD is to avoid wasting a possibly huge amount of CPU time polling when nothing changed (not forgetting that in some cases wasting CPU time polling like this can prevent other threads, including the finish_aio() thread, from getting CPU time). In other words, you want to delete the while(aio_error(&my_aiocb) == EINPROGRESS) loop, so you used SIGEV_THREAD so that you can delete that polling loop.
The new problem is that (if the main thread has to wait until the data is ready) you need some other way for the main thread to wait until the data is ready. However, typically it's not "the aio_read() completed" that you actually care about, it's something else. For example, maybe the raw file data is a bunch of values in a text file (like "12, 34, 56, 78") and you want to parse that data and create an array of integers, and want to notify the main thread that the array of integers is ready (and don't want to notify the main thread if you're starting to parse the file's data). It might be like:
int parsed_file_result = 0;
void finish_aio(sigval_t sigval) {
struct aiocb *my_aiocb = (struct aiocb *)sigval.sival_ptr;
int status;
status = aio_error(my_aiocb);
close(my_aiocb->aio_fildes);
if(status == 0) {
/* Read was successful */
parsed_file_result = parse_file_data(); /* Create the array of integers */
} else {
/* Read failed, handle the error somehow */
parsed_file_result = -1; /* Tell main thread we failed to create the array of integers */
}
/* Tell the main thread it can continue somehow */
}
One of the best ways to tell the main thread it can continue (at the end of finish_aio()) is to use pthread condition variables (e.g. pthread_cond_signal() called at the end of finish_aio(), with pthread_cond_wait() in the main thread). In this case the main thread will simply block (the kernel/scheduler will not give it any CPU time until pthread_cond_signal() is called), so it wastes no CPU time polling.
Sadly, pthread condition variables aren't trivial (they require a mutex, initialization, etc.), and teaching/showing their use here is a little too far from the original topic. Fortunately, you shouldn't have much trouble finding a good tutorial elsewhere.
The important part is that if you used SIGEV_THREAD (so that you can delete that awful while(aio_error(&my_aiocb) == EINPROGRESS) polling loop) you're left with no reason to call aio_error(&my_aiocb) until after the finish_aio() has already been started; and no reason to care if aio_error(&my_aiocb) would've been changed (or not) before finish_aio() is started.
I'm currently learning concurrent programming. I have two threads and I want them to act like this:
the first thread runs N times while the second one waits
when the first one is done, the second does its job and the first waits
when the second thread is done, repeat.
I'm trying to do it in C using pthread library. Here's some pseudocode (hopefully understandable)
int cnt = 0;
void* thread1(){
while(1){
// thread 1 code
cnt ++;
if(cnt == N){
let_thread2_work();
wait_thread2();
}
}
}
void* thread2(){
while(1){
wait_thread1();
// thread2 code
cnt = 0;
let_thread1_work();
}
}
Can anyone please help me ?
Like David Schwartz commented, this doesn't make a ton of sense with just thread1 and thread2 waiting on each other, not actually doing any work in parallel. But maybe eventually you want multiple "thread1"s, all processing jobs at the same time until they finish a batch of N jobs, then they all stop and wait for "thread2" to do some kind of post-processing before the pool of worker threads start back up.
In this situation I would consider using a couple condition variables, one for your worker threads to communicate to your post-processing thread that they're waiting, and one for your post-processing thread to tell the workers to start working again. You can declare a condition variable and a helper mutex in the global scope right next to your "cnt" variable, which I'm calling "jobs_done" for clarity:
#include <pthread.h>
#define NUM_WORKERS 1 // although it doesn't make sense to just have 1
#define BATCH_SIZE 50 // this is "N"
// we're going to keep track of how many jobs we have done in
// this variable
int jobs_done = 0;
// when a worker thread checks jobs_done and it's N or greater,
// that means we have to wait for the post-processing thread to set
// jobs_done back to 0. so the worker threads "wait" on the
// condition variable, and the post-processing thread "broadcasts"
// to the condition variable to wake them all up again once it's
// done its work
pthread_cond_t jobs_ready_cv;
// we're going to use this helper mutex. whenever any thread
// reads or writes to the jobs_done variable, we have to lock this
// mutex. that includes the worker threads when they check to see if
// they're ready to wake up again.
pthread_mutex_t jobs_mx;
// here's how the worker threads will communicate to the post-process
// thread that a batch is done. to make sure that all N jobs are fully
// complete before postprocessing happens, we'll use this variable to
// keep track of how many threads are waiting for postprocessing to finish.
int workers_done = 0;
// we'll also use a separate condition variable and separate mutex to
// communicate to the postprocess thread.
pthread_cond_t workers_done_cv;
pthread_mutex_t workers_done_mx;
Then in your setup code, initialize the condition variables and helper mutexes:
int main() { // or something
pthread_cond_init(&jobs_ready_cv, NULL);
pthread_mutex_init(&jobs_mx, NULL);
pthread_cond_init(&workers_done_cv, NULL);
pthread_mutex_init(&workers_done_mx, NULL);
...
}
So, your worker threads (or "thread1"), before taking a job, will check to see how many jobs have been taken. If N (here, BATCH_SIZE) have been taken, then it updates a variable to indicate that it has no work left to do. If it finds that all of the worker threads are done, then it signals the postprocess thread ("thread2") through workers_done_cv. Then, the thread waits for a signal from the postprocess thread through jobs_ready_cv:
void* worker_thread(){
while(1){
/* first, we check if the batch is complete. we do this first
* so we don't accidentally take an extra job.
*/
pthread_mutex_lock(&jobs_mx);
if (jobs_done == BATCH_SIZE) {
/* if BATCH_SIZE jobs have been done, first let's increment workers_done,
* and if all workers are done, let's notify the postprocess thread.
* after that, we release the workers_done mutex so the postprocess
* thread can wake up from the workers_done condition variable.
*/
pthread_mutex_lock(&workers_done_mx);
++workers_done;
if (workers_done == NUM_WORKERS) {
pthread_cond_broadcast(&workers_done_cv);
}
pthread_mutex_unlock(&workers_done_mx);
/* now we wait for the postprocess thread to do its work. this
* unlocks the mutex. when we get the signal to start doing jobs
* again, the mutex will relock automatically when we wake up.
*
* note that we use a while loop here to check the jobs_done
* variable after we wake up. That's because sometimes threads
* can wake up on accident even if no signal or broadcast happened,
* so we need to make sure that the postprocess thread actually
* reset the variable. google "spurious wakeups"
*/
while (jobs_done == BATCH_SIZE) {
pthread_cond_wait(&jobs_ready_cv, &jobs_mx);
}
}
/* okay, now we're ready to take a job.
*/
++jobs_done;
pthread_mutex_unlock(&jobs_mx);
// thread 1 code
}
}
Meanwhile, your postprocess thread waits on the workers_done_cv immediately, and doesn't wake up until the last worker thread is done and calls pthread_cond_broadcast(&workers_done_cv). Then, it does whatever it needs to, resets the counts, and broadcasts to the worker threads to wake them back up.
void* postprocess_thread(){
while(1){
/* first, we wait for our worker threads to be done
*/
pthread_mutex_lock(&workers_done_mx);
while (workers_done != NUM_WORKERS) {
pthread_cond_wait(&workers_done_cv, &workers_done_mx);
}
// thread2 code
/* reset count of stalled worker threads, release mutex */
workers_done = 0;
pthread_mutex_unlock(&workers_done_mx);
/* reset number of jobs done and wake up worker threads */
pthread_mutex_lock(&jobs_mx);
jobs_done = 0;
pthread_mutex_unlock(&jobs_mx);
pthread_cond_broadcast(&jobs_ready_cv);
}
}
Also take heed of David Schwartz's advice that you probably don't actually need the postprocessing thread to wait on the worker threads. If you don't need this, then you can get rid of the condition variable that makes the worker threads wait for the postprocess thread, and this implementation becomes a lot simpler.
edit: mutex protected the assignment to jobs_done in postprocess_thread(), added a forgotten ampersand
One solution is to use a mutex:
https://www.geeksforgeeks.org/mutex-lock-for-linux-thread-synchronization/
Another solution, using a semaphore:
https://www.geeksforgeeks.org/difference-between-binary-semaphore-and-mutex/?ref=rp
Or create your own lightweight thread scheduler,
for example:
http://www.dunkels.com/adam/pt/ [edited]
I've been working on a project lately, and I need to manage a pair of thread pools.
What the worker threads in the pools do is basically execute some kind of pop operation on their respective queues, possibly wait on a condition variable (pthread_cond_t) if there is no value available in the queue, and once they get an item, parse it and execute operations accordingly.
What I'm concerned about is the fact that I want no memory leaks, and to achieve that I noticed that calling pthread_cancel on each thread when the main process is exiting is definitely a bad idea, as it leaves a lot of garbage around.
The point is, my first thought was to use an exit flag which I can set when the threads need to exit, so that they can easily free their memory and call pthread_exit...
I guess I should set this flag, then send a broadcast signal to the threads waiting on the condition variable, and check the flag right after the pop operation...
Is this really the correct way to implement a good thread pool termination? I don't feel that confident about this...
I'm writing some pseudo-code here to explain what I'm talking about.
Each pool thread will run some code structured like this:
/* Worker thread (which will run on each pool thread) */
{ /* Thread initialization */ ... }
loop {
value = pop();
{ /* Mutex lock because of the shared flag */ ... }
if (flag) {{ /* Free memory and unlock mutex */ ... } pthread_exit(); }
{ /* Unlock the mutex */ ... }
{ /* Elaborate value */ ... }
}
return NULL;
And there will be some kind of pool_stopRunning() function which will look like:
/* pool_stopRunning() function code */
{ /* Acquire flag mutex */ ... }
setFlag(flag);
{ /* Unlock flag mutex */ ... }
{ /* Acquire queue mutex */ ... }
pthread_cond_broadcast(...);
{ /* Unlock queue mutex */ ... }
Thanks in advance; I just need to be sure that there isn't a fancier way to stop a thread pool... (or get to know a better way, by any chance)
As always, I'm sorry if there are any typos; I'm not an English speaker and it's kind of late right now >:
What you are describing will work, but I would suggest a different approach...
You already have a mechanism for assigning tasks to threads, complete with all appropriate synchronization. So instead of complicating the design with some new parallel mechanism, just define a new type of task called "STOP". If there are N threads servicing a queue and you want to terminate them, push N STOP tasks onto the queue. Then just wait for all of the threads to terminate. (This last can be done via "join", so it should not require any new mechanism, either.)
No muss, no fuss.
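A sketch of the idea, assuming a queue with the push()/pop() operations described above (the task type, N, and the workers array are illustrative, not from the original post):

enum task_kind { TASK_JOB, TASK_STOP };

struct task {
    enum task_kind kind;
    /* ... job payload ... */
};

/* Worker loop: */
for (;;) {
    struct task *t = pop();    /* blocks on the queue's cond var until a task arrives */
    if (t->kind == TASK_STOP) {
        free(t);
        break;                 /* exit the thread normally */
    }
    /* ... elaborate the value ... */
    free(t);
}

/* Shutdown: push one STOP task per worker, then join them all. */
for (int i = 0; i < N; ++i) {
    struct task *t = malloc(sizeof *t);
    t->kind = TASK_STOP;
    push(t);
}
for (int i = 0; i < N; ++i)
    pthread_join(workers[i], NULL);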
With respect to symmetry with setting the flag and reducing serialization, this code:
{ /* Mutex lock because of the shared flag */ ... }
if (flag) {{ /* Free memory and unlock mutex */ ... } pthread_exit(); }
{ /* Unlock the mutex */ ... }
should look like this:
{ /* Mutex lock because of the shared flag */ ... }
flagcopy = readFlag();
{ /* Unlock the mutex */ ... }
if (flagcopy) {{ /* Free memory */ ... } pthread_exit(); }
Having said that, you can (should?) factor the mutex code into the setFlag and readFlag methods.
There is one more thing. If the flag is only a boolean and it is only changed once before the whole thing shuts down (i.e., it's never unset after being set), then I would argue that protecting the read with a mutex is not required.
I say this because if the above assumptions are true and if the loop's duration is very short and the loop iteration frequency is high, then you would be imposing undue serialization upon the business task and potentially increasing the response time unacceptably.
I have a function, say void *WorkerThread(void *ptr).
The function WorkerThread(void *ptr) has an infinite loop which reads and writes continuously from the serial port,
for example:
void *WorkerThread( void *ptr)
{
while(1)
{
// READS AND WRITES from Serial Port USING MUTEX_LOCK AND MUTEX_UNLOCK
} //while ends
}
The other function I wrote is ThreadTest,
example
int ThreadTest()
{
pthread_t Worker;
int iret1;
pthread_mutex_init(&stop_mutex, NULL);
if( (iret1 = pthread_create(&Worker, NULL, WorkerThread, NULL)) == 0)
{
pthread_mutex_lock(&stop_mutex);
stopThread = true;
pthread_mutex_unlock(&stop_mutex);
}
if (stopThread != false)
stopThread = false;
pthread_mutex_destroy(&stop_mutex);
return 0;
}
In main function
I have something like
int main(int argc, char **argv)
{
fd = OpenSerialPort();
if( ConfigurePort(fd) < 0) return 0;
while (true)
{
ThreadTest();
}
return 0;
}
Now, when I run this sort of code with debug statements, it runs fine for a few hours and then throws a message like "can't able to create thread" and the application terminates.
Does anyone have an idea where I am making a mistake?
Also, is there a way to run ThreadTest in main without using while(true), as I am already using while(1) in WorkerThread to read and write infinitely?
All comments and criticism are welcome.
Thanks & regards,
SamPrat.
You are creating threads continually and might be hitting the limit on the number of threads.
The pthread_create man page says:
EAGAIN Insufficient resources to create another thread, or a system-imposed
limit on the number of threads was encountered. The latter case may
occur in two ways: the RLIMIT_NPROC soft resource limit (set via
setrlimit(2)), which limits the number of processes for a real user ID,
was reached; or the kernel's system-wide limit on the number of
threads, /proc/sys/kernel/threads-max, was reached.
You should rethink the design of your application. Creating an infinite number of threads is not a good design.
[UPDATE]
You are using a lock to set an integer variable:
pthread_mutex_lock(&stop_mutex);
stopThread = true;
pthread_mutex_unlock(&stop_mutex);
However, this is not required, as setting an int is atomic (on probably all architectures?). You should use a lock when you are doing non-atomic operations, e.g. test-and-set:
take_lock();
if (a != 1)
    a = 1;
release_lock();
You create a new thread each time ThreadTest is called, and never destroy these threads. So eventually you (or the OS) run out of thread handles (a limited resource).
Threads consume resources (memory & processing), and you're creating a thread each time your main loop calls ThreadTest(). And resources are finite, while your loop is not, so this will eventually throw a memory allocation error.
You should get rid of the main loop, and make ThreadTest return the newly created thread (pthread_t). Finally, make main wait for the thread termination using pthread_join.
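A sketch of that restructuring (reusing the names from the question; error handling kept minimal):

pthread_t ThreadTest(void)
{
    pthread_t worker;
    pthread_mutex_init(&stop_mutex, NULL);
    if (pthread_create(&worker, NULL, WorkerThread, NULL) != 0) {
        /* handle the error */
    }
    return worker;
}

int main(int argc, char **argv)
{
    fd = OpenSerialPort();
    if (ConfigurePort(fd) < 0) return 0;
    pthread_t worker = ThreadTest();   /* create the worker once */
    pthread_join(worker, NULL);        /* wait for it to terminate */
    return 0;
}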
Your pthreads are zombies and consume system resources. On Linux you can use ulimit -a to check your active upper limits -- but they are not infinite either. Use pthread_join() to let a thread finish and release the resources it consumed.
Do you know that select() is able to wait on multiple (device) handles? You can also define a user-defined source to stop select(), or a timeout. With this in mind, you are able to start one thread and let it sleep if nothing occurs. If you intend to stop it, you can send an event (or let the timeout expire) to break out of the select() call.
An additional design concept you have to consider is message queues, to share information between your main application and/or pthreads. select() is compatible with this technique, so you can use one concept for all data sources (devices and message queues).
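A sketch of the select()-with-wakeup idea; here the "user defined source" is a self-pipe, and serial_fd, stop_pipe_rd, and the 1-second timeout are assumptions for illustration:

fd_set rfds;
struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };   /* 1-second timeout */
FD_ZERO(&rfds);
FD_SET(serial_fd, &rfds);
FD_SET(stop_pipe_rd, &rfds);   /* writing a byte to the pipe's other end wakes us up */
int maxfd = (serial_fd > stop_pipe_rd) ? serial_fd : stop_pipe_rd;
int n = select(maxfd + 1, &rfds, NULL, NULL, &tv);
if (n > 0 && FD_ISSET(stop_pipe_rd, &rfds)) {
    /* stop event: leave the read/write loop and exit the thread */
} else if (n > 0 && FD_ISSET(serial_fd, &rfds)) {
    /* data available: read from the serial port */
} else if (n == 0) {
    /* timeout: re-check a stop flag or do periodic work */
}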
Here is a reference to good pthread reading and the best pthread book available: Programming with POSIX(R) Threads, ISBN-13: 978-0201633924.
Looks like you've not called pthread_join(), which cleans up state after non-detached threads are finished. I'd speculate that you've hit some per-process resource limit here as a result.
As others have noted, this is not great design though: why not re-use the thread rather than creating a new one on every loop?