This question already has an answer here:
Closed 11 years ago.
Possible Duplicate:
Interprocess semaphores sometimes not working as expected
In my application, I notice that a semaphore of type sem_t sometimes becomes 1 from 0 without sem_post ever being executed. How come? What can cause this? The semaphore is used for inter-process communication and you can look at the code here.
The code that you are linking to doesn't check the return values of the sem_t calls. If you look in the manual you can see that e.g. sem_wait can return prematurely, a so-called spurious wakeup.
Always check the return codes of these functions. If the return value is -1, check errno for the corresponding error and decide whether it is a transient one. If so, iterate.
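As a minimal sketch of that advice (assuming the only transient failure you want to retry is EINTR; the helper name is made up):

```c
#include <errno.h>
#include <semaphore.h>

/* Hypothetical helper: wait on a semaphore, retrying when the call is
 * interrupted by a signal. Returns 0 on success, -1 on a real error. */
int sem_wait_retry(sem_t *sem)
{
    int rc;
    while ((rc = sem_wait(sem)) == -1 && errno == EINTR)
        continue;   /* EINTR is transient: just iterate */
    return rc;      /* 0 on success; -1 with errno set otherwise */
}
```

The same pattern applies to sem_timedwait and sem_trywait; only the set of errno values you treat as transient changes.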
This question already has answers here:
is it necessary to call pthread_mutex_destroy on a mutex?
(3 answers)
Closed 2 years ago.
Imagine I've got a pthread mutex somewhere on the heap.
pthread_mutex_t *mutex;
Should I always destroy like this before freeing the memory?
pthread_mutex_destroy(mutex);
free(mutex);
Or should I simply invoke free() without concerning myself with the destroy? Its man page suggests that it switches the mutex's internal state back to uninitialized, so is it really necessary when I'm going to free the memory anyway?
The thing about pthread_mutex_destroy() is that it returns an error code, which is very useful to confirm the state of the mutex in question. According to the man page:
The pthread_mutex_destroy() function may fail if:
EBUSY
The implementation has detected an attempt to destroy the object referenced by mutex while it is locked or referenced (for example,
while being used in a pthread_cond_timedwait() or pthread_cond_wait())
by another thread.
EINVAL
The value specified by mutex is invalid.
So, if you are already absolutely certain that the mutex is suitable for release, then you can just call free(mutex). If you are just assuming that it is suitable, then strangeness is likely to ensue.
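A sketch of the destroy-then-free pattern, using a hypothetical helper that refuses to free when the destroy reports an error such as EBUSY or EINVAL:

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical helper: destroy the mutex and free it only if the destroy
 * succeeded. Returns 0 on success, or the pthread error code (e.g. EBUSY). */
int destroy_and_free(pthread_mutex_t *mutex)
{
    int rc = pthread_mutex_destroy(mutex);
    if (rc != 0)
        return rc;   /* e.g. EBUSY: still locked or referenced; do NOT free */
    free(mutex);     /* safe only after a successful destroy */
    return 0;
}
```

If the destroy fails you still own the memory, so the caller can decide whether to retry later or treat it as a bug.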
This question already has answers here:
POSIX API call to list all the pthreads running in a process
(3 answers)
Closed 9 years ago.
I want to write a C function that, when called by a process, returns the number of threads created by that process.
I want to get the value not by counting, but from a kernel structure.
Which structure has this information?
You can get a lot of information about your process by looking in /proc/$$, where $$ is your process ID. The number of threads is reported in /proc/$$/status.
My solution: write a function that parses the file /proc/$$/status to get the number of threads.
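A minimal Linux-only sketch of that idea, parsing the Threads: line of /proc/self/status (the function name is made up):

```c
#include <stdio.h>

/* Hypothetical helper: return the number of threads in the calling process
 * by parsing the "Threads:" line of /proc/self/status, or -1 on error.
 * Linux-specific: relies on the procfs status format. */
int thread_count(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    if (f == NULL)
        return -1;

    char line[256];
    int count = -1;
    while (fgets(line, sizeof line, f) != NULL) {
        if (sscanf(line, "Threads: %d", &count) == 1)
            break;   /* found the line we want */
    }
    fclose(f);
    return count;
}
```

Note that the value is only a snapshot: threads may be created or exit between the read and whatever you do with the result.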
This question already has answers here:
Conditional Variable vs Semaphore
(8 answers)
Closed 8 years ago.
Why should we use wait() and signal() operation in multithreading applications?
I'm relatively new to multithreading and somewhat understand mutual exclusion but I need a better understanding of how wait() and signal() come into the equation.
It seems I'm achieving thread safety by only using lock() and unlock(). Am I wrong?
Can someone give me an example of wait/signal being used and wait and signal not being used with lock/unlock? What are the benefits to using wait/signal over just lock/unlock?
Thanks.
I work with computational maths/science so my examples come from there.
If you were doing a reduce operation such as a dot product (you need to sum many partial results), then lock and unlock are enough: the order of the summation does not matter, and whichever thread finds the lock free should take it.
If you were solving a PDE over time, the previous time step must be completed before you can take the next one. A lock/unlock pair wouldn't work here: even if the data is free for modification, the prerequisite calculations may not have been done yet. This is where you would use wait/signal.
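To make the difference concrete, here is a sketch of the time-stepping case using POSIX condition variables (all names are illustrative): a mutex alone can only express "one at a time", while a condition variable lets a thread sleep until "step n is complete" becomes true.

```c
#include <pthread.h>

/* Hypothetical time-stepping skeleton: no thread may start step n+1
 * until step n has completed. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  step_done = PTHREAD_COND_INITIALIZER;
static int current_step = 0;

/* Called by the thread that finishes a time step. */
void finish_step(void)
{
    pthread_mutex_lock(&lock);
    current_step++;
    pthread_cond_broadcast(&step_done);   /* wake all waiting threads */
    pthread_mutex_unlock(&lock);
}

/* Block until at least `step` time steps have completed. */
void wait_for_step(int step)
{
    pthread_mutex_lock(&lock);
    while (current_step < step)           /* loop guards against spurious wakeups */
        pthread_cond_wait(&step_done, &lock);
    pthread_mutex_unlock(&lock);
}
```

With only lock/unlock, a waiting thread would have to poll the counter in a busy loop; pthread_cond_wait atomically releases the mutex and sleeps until it is signalled.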
Cramer, your answer gave me good hints, but the answer on this page was exactly the explanation I needed:
Conditional Variable vs Semaphore
This question already has answers here:
Are there any standard exit status codes in Linux?
(11 answers)
Closed 2 years ago.
What error code does a process that segfaults return? From my experiments, it seems to be "139", but I'd like to find why this is so and how standard it is.
When a process is terminated, the shell only stores an 8-bit return code, and it sets the high bit (i.e. reports 128 plus the signal number) if the process was abnormally terminated. Because your process was terminated by a segmentation fault, the signal sent is usually SIGSEGV (invalid memory reference), which has the value 11.
So, because your process was terminated abnormally, you take 128 and add the value of the signal that terminated the process, which was 11, and you get 139.
The relevant syscall (giving the status of a terminated process) is waitpid(2). The 139 is decoded with WIFSIGNALED, WTERMSIG, etc. On Linux the actual bits are described in the internal file /usr/include/bits/waitstatus.h, which is included from the <sys/wait.h> header.
The wait and waitpid calls are standard in POSIX, and so are the macro names (like WTERMSIG). The actual implementation of these macros, and the actual signal numbers, hence the code reported by the shell, are implementation-specific.
The signal(7) Linux man page gives the number of the signals.
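A small sketch tying the pieces together (the child raises SIGSEGV directly instead of dereferencing a bad pointer, and the helper name is made up): waitpid(2) reports the status, and WIFSIGNALED/WTERMSIG decode it, which is where the shell's 128 + 11 = 139 comes from.

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical helper: fork a child that dies from SIGSEGV, then decode
 * its wait status. Returns the terminating signal number, or -1 if the
 * child exited normally. */
int child_segv_signal(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        raise(SIGSEGV);   /* default action: abnormal termination */
        _exit(0);         /* not reached */
    }
    int status;
    if (waitpid(pid, &status, 0) != pid)
        return -1;
    return WIFSIGNALED(status) ? WTERMSIG(status) : -1;
}
```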
This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
Why is writing a closed TCP socket worse than reading one?
Why doesn't an erroneous return value suffice?
What can I do in a signal handler that I can't do by testing the return value for EPIPE?
Back in the old days, almost every signal caused a Unix program to terminate. Because inter-process communication through pipes is fundamental in Unix, SIGPIPE was intended to terminate programs that didn't handle write(2)/read(2) errors.
Suppose you have two processes communicating through a pipe. If one of them dies, one of the ends of the pipe isn't active anymore. SIGPIPE is intended to kill the other process as well.
As an example, consider:
cat myfile | grep find_something
If grep exits or is killed while cat is still writing, cat no longer has a reader and is killed by a SIGPIPE signal on its next write. (Reading from a pipe whose writer has died just returns end-of-file, so the signal matters for the writing side.) If no signal were sent and cat didn't check the return value of write, it would keep writing into a pipe that nobody reads.
As with many other things, my guess is that it was just a design choice someone made that eventually made it into the POSIX standard and has remained to this day. That someone may have thought that trying to send data over a closed socket is a Bad Thing™ and that your program needs to be notified immediately, and since nobody ever checks error codes, what better way to notify you than to send a signal?
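For completeness, here is a sketch of the EPIPE-only alternative the question asks about (the helper name is made up): with SIGPIPE ignored, a write to a pipe with no reader fails with EPIPE instead of killing the process.

```c
#include <errno.h>
#include <signal.h>
#include <unistd.h>

/* Hypothetical helper: write to a pipe whose read end is closed.
 * With SIGPIPE ignored, write() returns -1 with errno == EPIPE instead
 * of terminating the process. Returns the errno from the failed write,
 * or 0 if it unexpectedly succeeded, -1 on setup failure. */
int write_to_closed_pipe(void)
{
    signal(SIGPIPE, SIG_IGN);   /* opt out of the default kill-on-write */

    int fds[2];
    if (pipe(fds) == -1)
        return -1;
    close(fds[0]);              /* no reader left */

    ssize_t n = write(fds[1], "x", 1);
    int err = (n == -1) ? errno : 0;
    close(fds[1]);
    return err;
}
```

This is exactly the "testing the return value for EPIPE" style the question describes; the signal only adds value if you want the default terminate-on-broken-pipe behaviour without writing any error handling.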