So I basically have code like this:
pthread_t cpu[10];
while (a certain condition) {
    for (int i = 0; i < 10; i++) {
        pthread_create(&cpu[i], NULL, (a function), NULL);
    }
}
This code should be running about 10 threads at a time. However, after the while loop runs a certain number of times, it reports a pthread error with code 11. I know I am launching the threads multiple times; however, shouldn't only 10 instances be running at once?
The limit on threads is being reached because the program is calling pthread_create() in a loop, constantly spawning more threads, without ever calling pthread_join() to clean up the resources of the existing threads. This quickly fills up the process's thread table, at which point pthread_create() starts to fail because there is no room in the table for any more threads.
To avoid that problem, you need to modify the code so that it only summons a finite (and reasonable -- read: dozens, not hundreds or thousands) number of threads into existence at one time.
A simple calling-pattern to achieve that would look something like this:
pthread_t cpu[10];
while (a certain condition) {
    // spawn 10 threads
    for (int i = 0; i < 10; i++) {
        pthread_create(&cpu[i], NULL, (a function), NULL);
    }
    // at this point all 10 threads are running

    // wait until all 10 threads have exited
    for (int i = 0; i < 10; i++) {
        pthread_join(cpu[i], NULL);
    }
}
Another common (and somewhat more elegant) approach would be to use a thread-pool instead of spawning and joining threads. That's often preferable because it avoids the overhead required to constantly create and then tear down threads, and because it means that as soon as a thread has finished computing job A, it can immediately grab job B out of the pending-jobs-queue and start working on it -- unlike the code shown above, which has to wait for all 10 threads to complete before it can spawn 10 more.
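For illustration, here is a minimal sketch of such a thread pool; the worker count, queue layout, and all names are invented for this example, and a production version would need error checking on every pthread call:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_WORKERS 10

/* A pending job: a function pointer and its argument. */
typedef struct job {
    void (*run)(void *);
    void *arg;
    struct job *next;
} job_t;

static job_t *queue_head = NULL, *queue_tail = NULL;
static int shutting_down = 0;
static pthread_mutex_t queue_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t queue_cond = PTHREAD_COND_INITIALIZER;

/* Each worker loops forever: grab the next job, run it, repeat. */
static void *worker(void *unused)
{
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&queue_mutex);
        while (queue_head == NULL && !shutting_down)
            pthread_cond_wait(&queue_cond, &queue_mutex);
        if (queue_head == NULL && shutting_down) {
            pthread_mutex_unlock(&queue_mutex);
            return NULL;    /* queue drained and shutdown requested */
        }
        job_t *j = queue_head;
        queue_head = j->next;
        if (queue_head == NULL)
            queue_tail = NULL;
        pthread_mutex_unlock(&queue_mutex);

        j->run(j->arg);     /* do the work outside the lock */
        free(j);
    }
}

/* Append a job to the queue and wake one worker. */
static void submit(void (*run)(void *), void *arg)
{
    job_t *j = malloc(sizeof *j);
    if (j == NULL)
        return;
    j->run = run;
    j->arg = arg;
    j->next = NULL;
    pthread_mutex_lock(&queue_mutex);
    if (queue_tail)
        queue_tail->next = j;
    else
        queue_head = j;
    queue_tail = j;
    pthread_cond_signal(&queue_cond);
    pthread_mutex_unlock(&queue_mutex);
}

static void print_job(void *arg)
{
    printf("job %d\n", (int)(size_t)arg);   /* argument smuggled in the pointer */
}

int main(void)
{
    pthread_t workers[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker, NULL);

    for (int i = 0; i < 100; i++)
        submit(print_job, (void *)(size_t)i);

    /* tell the workers to exit once the queue drains, then join them */
    pthread_mutex_lock(&queue_mutex);
    shutting_down = 1;
    pthread_cond_broadcast(&queue_cond);
    pthread_mutex_unlock(&queue_mutex);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(workers[i], NULL);
    return 0;
}

Because each worker goes straight back to the queue after finishing a job, there is no spawn/teardown overhead and no lock-step waiting for a whole batch to finish.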
#include <stdio.h>
#include <string.h>

int main() {
    printf("%s\n", strerror(11));
    return 0;
}
This gives, on my system: Resource temporarily unavailable
Also on my system, EAGAIN is defined as 11 (in errno.h).
man pthread_create lists the following possible error return codes:
EAGAIN
Insufficient resources to create another thread.
EAGAIN
A system-imposed limit on the number of threads was encountered. There are a number of limits that may trigger this error: ...
The reason for that was already answered by Jeremy Friesner in the comments section above.
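Since the pthread functions return the error code directly rather than setting errno, the value to pass to strerror() is pthread_create()'s own return value. A minimal sketch of that check, with a placeholder thread function:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

void *work(void *arg) { return arg; }   /* placeholder thread body */

int main(void)
{
    pthread_t t;
    /* pthread_create() returns the error code directly; it does not set errno */
    int rc = pthread_create(&t, NULL, work, NULL);
    if (rc != 0) {
        fprintf(stderr, "pthread_create: %s (code %d)\n", strerror(rc), rc);
        return 1;
    }
    pthread_join(t, NULL);
    return 0;
}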
I am trying to write a program that will continuously take readings from a sensor that monitors water level. Then, every 10-15 minutes, it will need to take soil moisture readings from other sensors. I have never used POSIX pthreads before. This is what I have so far as a concept of how it may work.
It seems to be working the way I want it to, but is this a correct way to implement it? Is there anything else I need to do?
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void funct(void);   /* forward declaration, since soilMoisture() calls it */

void *soilMoisture(void *vargp)
{
    sleep(10);
    funct();
    return NULL;
}

int main()
{
    pthread_t pt;
    int k = 1;
    pthread_create(&pt, NULL, soilMoisture, NULL);
    while (k > 0)
    {
        printf("This is the main thread (Measuring the water level) : %d\n", k);
        sleep(1);
    }
    return 0;
}

void funct()
{
    printf("******(Measuring soil moisture after sleeping for 10SEC)***********\n");
    pthread_t ptk;
    pthread_create(&ptk, NULL, soilMoisture, NULL);
}
It is not clear why you create a new thread every 10 seconds rather than just letting the original continue. Since the original thread exits, you aren't directly accumulating threads, but you aren't waiting for any of them, so there are some resources unreleased. You also aren't error checking, so you won't know when anything does go wrong; monitoring will simply stop.
You will eventually run out of space, one way or another. You have a couple of options.
Don't create a new thread every 10 seconds. Leave the thread running by making a loop in the soilMoisture() function and do away with funct(), or at least the pthread_create() call in it (see the sketch after these options).
If you must create new threads, make them detached. You'll need to create a non-default pthread_attr_t using the functions outlined and linked to in When pthread_attr_t is not NULL.
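Here is a hedged sketch of the first option, reusing the question's function name; the 10-second sleep stands in for the 10-15 minute production interval, and the reading itself is reduced to a printf:

#include <stdio.h>
#include <unistd.h>

/* One long-lived thread that loops forever, so no new
   threads are ever spawned after startup. */
void *soilMoisture(void *vargp)
{
    (void)vargp;
    for (;;) {
        sleep(10);   /* 10 s for testing; 10-15 minutes in production */
        printf("******(Measuring soil moisture)***********\n");
    }
    return NULL;     /* never reached; the thread runs for the life of the program */
}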
There are myriad issues you've not yet dealt with, notably synchronization between the two threads. If you don't have any such synchronization, you'd be better off with two separate programs (the Unix mantra of "each program does one job but does it well" still applies): one program to do the soil moisture reading, and the other to do the water level reading. You'll need to decide whether data is stored in a database or otherwise logged, and for how long such data is kept. You'll need to think about rotating logs. What should happen if sensors go off-line? How can you restart threads or processes? How can you detect when threads or processes lock up or exit unexpectedly? Etc.
I assume the discrepancy between 10-15 minutes mentioned in the question and 10 seconds in the code is strictly for practical testing rather than a misunderstanding of the POSIX sleep() function.
I am writing a multi-threaded program where one thread executes a lot of system calls (like read and write), and the other thread executes normal calls like printf.
Suppose thread A is for normal calls and thread B is for system calls. My main function is:
int main()
{
    pthread_t thread_A;
    pthread_t thread_B;
    pthread_create(&thread_B, NULL, &system_call_func, NULL);
    pthread_create(&thread_A, NULL, &printf_func, NULL);
    pthread_join(thread_B, NULL);
    pthread_join(thread_A, NULL);
    printf("Last thread to be executed was %c\n", write_last);
    return 0;
}
From this, I found that the thread with system calls always executes last. Even if I change the order of thread creation and joining, it is still thread B.
I have two questions: does the order of thread creation/joining matter? And is it because of the system calls that thread B always executes last?
You're just measuring which thread finishes first, not which one runs first. Assuming they both run in parallel and start at roughly the same time, the one that spends less time working is going to finish first.
If you want to observe the sequence of operations in both, run the program under strace -f, but be aware that the overhead of tracing slows things down a lot and tends to eliminate parallelism in the traced program except when it's doing purely computational tasks with no system calls.
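If the goal is really just to know which thread finished last, one approach is to have each thread record its identity under a mutex right before returning. This is a hypothetical sketch: write_last, the thread bodies, and the mutex are all stand-ins, since the question doesn't show its actual code:

#include <pthread.h>
#include <stdio.h>

static char write_last;
static pthread_mutex_t last_mutex = PTHREAD_MUTEX_INITIALIZER;

static void mark_done(char id)
{
    pthread_mutex_lock(&last_mutex);
    write_last = id;    /* last writer wins */
    pthread_mutex_unlock(&last_mutex);
}

static void *system_call_func(void *arg) { (void)arg; mark_done('B'); return NULL; }
static void *printf_func(void *arg)      { (void)arg; mark_done('A'); return NULL; }

int main(void)
{
    pthread_t thread_A, thread_B;
    pthread_create(&thread_B, NULL, system_call_func, NULL);
    pthread_create(&thread_A, NULL, printf_func, NULL);
    pthread_join(thread_B, NULL);
    pthread_join(thread_A, NULL);
    printf("Last thread to finish was %c\n", write_last);
    return 0;
}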
I am making a multi-threaded C program which involves sharing a global dynamic integer array between two threads. One thread will keep adding elements to it, and the other will independently scan the array and free the scanned elements.
Can anyone suggest a way to do that? What I am doing now creates a deadlock.
Please, can anyone also provide code for it, or a way to resolve this deadlock, with a full explanation?
For the threads I would use pthread. Compile it with -pthread.
#include <pthread.h>

int *array;

// return type and argument should be `void *` for pthread
void *addfunction(void *p) {
    // add to array
    return NULL;
}

// same for this thread
void *scanfunction(void *p) {
    // scan the array
    return NULL;
}

int main(void) {
    // pthread_t variables needed for pthread
    pthread_t addfunction_t, scanfunction_t; // the names don't matter, but use the same ones for pthread_create() and pthread_join()
    // start the threads
    pthread_create(&addfunction_t, NULL, addfunction, NULL); // the third argument is the function to run, in this case addfunction()
    pthread_create(&scanfunction_t, NULL, scanfunction, NULL); // same for scanfunction()
    // wait until the threads finish; leave this out to continue while the threads are running
    pthread_join(addfunction_t, NULL);
    pthread_join(scanfunction_t, NULL);
    // code after pthread_join() is executed once the threads are no longer running
    return 0;
}
In cases like this, you need to look at the frequency and loading generated by each operation on the array. For instance, if the array is being scanned continually but only added to once an hour, it's worthwhile finding a really slow, latency-ridden write mechanism that eliminates the need for read locks. Locking up every access with a mutex would be very unsatisfactory in such a case.
Without details of the 'scan' operation, especially duration and frequency, it's not possible to suggest a thread communication strategy for good performance.
Another thing we don't know is the consequences of failure: it may not matter if a new addition is queued up for a while before actually being inserted, or it may.
If you want a 'Computer Science 101' answer with, quite possibly, very poor performance, lock up every access to the array with a mutex.
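As a concrete baseline, here is what that mutex-everywhere version might look like; the array layout and function names are invented for this sketch:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* every access to the shared array goes through one mutex */
static int *array = NULL;
static size_t count = 0;
static pthread_mutex_t array_mutex = PTHREAD_MUTEX_INITIALIZER;

void add_element(int value)
{
    pthread_mutex_lock(&array_mutex);
    int *tmp = realloc(array, (count + 1) * sizeof *array);
    if (tmp) {
        array = tmp;
        array[count++] = value;
    }
    pthread_mutex_unlock(&array_mutex);
}

/* "scan and free": consume everything added so far */
void scan_elements(void)
{
    pthread_mutex_lock(&array_mutex);
    for (size_t i = 0; i < count; i++)
        printf("%d\n", array[i]);
    free(array);
    array = NULL;
    count = 0;
    pthread_mutex_unlock(&array_mutex);
}

Every reader and writer serializes on array_mutex, which is exactly the performance concern raised above.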
http://www.liblfds.org
Release 6 contains a lock-free queue.
Compiles out of the box for Windows and Linux.
Hi,
I have a program in which a master process spawns N workers, each of which inverts rows of an image, giving me an inverted image at the end. The program uses shared memory and POSIX semaphores (unnamed sems, more specifically), and I use shmctl with IPC_RMID plus sem_close and sem_destroy in the terminate() function.
However, when I run the program several times, sometimes it gives me a segmentation fault, and it is in the first shmget. I've already modified my shmmax value in the kernel, but I can't do the same for the shmall value; I don't know why.
Can someone please help me? Why does this happen, and why isn't it all the time? The code seems fine, gives me what I want, is efficient and so on... but sometimes I have to reboot Ubuntu to be able to run it again, even though I'm freeing the resources.
Please enlighten me!
EDIT:
Here are the 3 files needed to run the code + the makefile:
http://pastebin.com/JqTkEkPv
http://pastebin.com/v7fQXyjs
http://pastebin.com/NbYFAGYq
http://pastebin.com/mbPg1QJm
You have to run it like this ./invert someimage.ppm outimage.ppm
(test with a small one for now please)
Here are some values that may be important:
$ ipcs -lm
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 262144
max total shared memory (kbytes) = 8388608
min seg size (bytes) = 1
$ ipcs -ls
------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767
EDIT: the segfault was solved! I was allocating an **array in shared memory, and that was a little bit odd. So I've allocated a segment for an *array only, and voilà. If you want, check the new code and comment.
If all your sem_t POSIX semaphores are unnamed, you should only use sem_init and sem_destroy on them, and never sem_close.
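A minimal sketch of that unnamed-semaphore lifecycle (here with pshared set to 0, i.e. shared between threads of one process; for sharing between processes, the sem_t itself must live in shared memory and pshared must be 1):

#include <semaphore.h>
#include <stdio.h>

int main(void)
{
    sem_t sem;
    if (sem_init(&sem, 0 /* pshared: 0 = threads of one process */, 1) == -1) {
        perror("sem_init");
        return 1;
    }
    sem_wait(&sem);      /* enter critical section */
    /* ... protected work ... */
    sem_post(&sem);      /* leave critical section */
    sem_destroy(&sem);   /* matches sem_init; sem_close is for named semaphores */
    return 0;
}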
Now that you've posted your code we can say a bit more.
Without having read it all in detail, I think the cleanup phase of your main looks suspicious. In fact it seems to me that all your worker processes will perform that cleanup phase too.
After the fork you should distinguish more clearly between what main does and what the workers do. Alternatives:
- Your main process could just wait on the pids of the workers, and only then do the rest of the processing and cleanup.
- All the worker processes could return in main after the call to worker.
- Call exit at the end of the worker function.
Edit after your code update:
I still think a better solution would be to do a classical wait for all the processes.
Now let's look at your worker processes. In fact these never terminate; there is no break statement in the while (1) loop. I think what is happening is that once there is no more work to be done:
- the worker is stuck in sem_wait(sem_remaining_lines)
- your main process gets notified of the termination
- it destroys sem_remaining_lines
- the worker returns from sem_wait and continues
- since mutex3 is also already destroyed (or maybe even unmapped), the wait on it returns immediately
- now it tries to access the data, and depending on how far the main process got with destruction, the data is mapped or not, and the worker crashes (or not)
As you can see, you have many problems in there. What I would do to clean up this mess is:
- waitpid before destroying the shared data
- sem_trywait instead of the 1 in while (1); but perhaps I didn't completely understand your control flow, so in any case, give the workers a termination condition
- capture all return values from system functions, in particular the sem_t family; these can be interrupted by IO, so you definitely must check for EINTR on them (see the sketch below)
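For the EINTR point, a small wrapper along these lines (the name is mine) retries the wait when a signal interrupts it and bails out on real errors:

#include <errno.h>
#include <semaphore.h>
#include <stdio.h>

static int sem_wait_retry(sem_t *sem)
{
    while (sem_wait(sem) == -1) {
        if (errno != EINTR) {
            perror("sem_wait");
            return -1;
        }
        /* interrupted by a signal: just try again */
    }
    return 0;
}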
I'm creating n threads and then starting their execution after a barrier breakdown.
In global data space:
int bkdown = 0;
In main():
pthread_barrier_init(&bar, NULL, n);
for (i = 0; i < n; i++)
{
    pthread_create(&threadIdArray[i], NULL, runner, NULL);
    if (i == n-2) printf("breakdown imminent!\n");
    if (i == n-1) printf("breakdown already occurred!\n");
}
In thread runner function:
void *runner(void *param)
{
    pthread_barrier_wait(&bar);
    if (bkdown == 0) { bkdown = 1; printf("barrier broken down!\n"); }
    ...
    pthread_exit(NULL);
}
Expected order:
breakdown imminent!
barrier broken down!
breakdown already occurred!
Actual order: (tested repeatedly)
breakdown imminent!
breakdown already occurred!
barrier broken down!
Could someone explain why I am not getting the "broken down" message before the "already occurred" message?
The order in which threads are run is dependent on the operating system. Just because you start a thread doesn't mean the OS is going to run it immediately.
If you really want to control the order in which threads are executed, you have to put some kind of synchronization in there (with mutexes or condition variables), as sketched below.
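A hedged sketch of that approach, using a condition variable so main doesn't proceed until the new thread has actually run (all names here are invented for the example):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int started = 0;

static void *runner(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&mtx);
    printf("thread is running\n");
    started = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&mtx);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, runner, NULL);
    pthread_mutex_lock(&mtx);
    while (!started)                 /* loop guards against spurious wakeups */
        pthread_cond_wait(&cond, &mtx);
    pthread_mutex_unlock(&mtx);
    printf("main continues only after the thread has run\n");
    pthread_join(t, NULL);
    return 0;
}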
for (i = 0; i < n; i++)
{
    pthread_create(&threadIdArray[i], NULL, runner, NULL);
    if (i == n-2) printf("breakdown imminent!\n");
    if (i == n-1) printf("breakdown already occurred!\n");
}
Nothing stops this loop from executing until i == n-1. pthread_create() just fires off a thread to be run; it doesn't wait for it to start or end. Thus you're at the mercy of the scheduler, which might decide to continue executing your loop, or switch to one of the newly created threads (or do both, on an SMP system).
You're also initializing the barrier to n, so in any case none of the threads will get past the barrier until you've created all of them.
In addition to the answers of nos and Starkey, you have to take into account that you have another serialization in your code that is often neglected: you are doing IO on the same FILE variable, namely stdout.
Access to that variable is mutexed internally, and the order in which your n+1 threads (including your calling thread) get access to that mutex is implementation defined; take it as basically random in your case.
So the order in which you get your printf output is the order in which your threads pass through these wormholes.
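POSIX even exposes that internal lock directly: flockfile()/funlockfile() let you hold a FILE's mutex across several calls, which makes the serialization explicit. A small illustrative sketch (the function and its parameters are invented for the example):

#include <stdio.h>

/* make a group of prints atomic with respect to other threads */
void report(int id, int value)
{
    flockfile(stdout);
    printf("thread %d: ", id);
    printf("value = %d\n", value);
    funlockfile(stdout);
}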
You can get the expected order in one of two ways:
Create each thread with a higher priority than the main thread. This will ensure that the new thread runs immediately after creation and waits on the barrier.
Move the "breakdown imminent!\n" print before the pthread_create() and call use a sched_yield() call after every pthread_create(). This will schedule the newly created thread for execution.