Named semaphore or flock, which is better? (C, Linux)

I am trying to create a shared memory region that will be used by multiple processes. These processes communicate with each other using MPI calls (MPI_Send, MPI_Recv).
I need a mechanism to control access to this shared memory. I asked a question yesterday to see whether MPI provides any facility to do that (Shared memory access control mechanism for processes created by MPI), but it seems there is no such provision in MPI.
So I have to choose between a named semaphore and flock.
With a named semaphore, if any of the processes dies abruptly without calling sem_close(), the semaphore persists and can be seen with ll /dev/shm/. This sometimes results in deadlock (if I run the same code again!), so I am currently leaning towards flock.
Just wanted to confirm: is flock best suited for this type of operation?
Are there any disadvantages of using flock?
Is there anything apart from named semaphores and flock that can be used here?
I am working in C under Linux.

You can also use a POSIX mutex in shared memory; you just have to set the "pshared" attribute on it first. See pthread_mutexattr_setpshared. This is arguably the most direct way to do what you want.
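A minimal sketch of that approach might look like the following (the shared-memory name "/my_lock" and the single-initialiser assumption are mine, not from the question):
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* A shared memory object to hold the mutex; "/my_lock" is a made-up name. */
    int fd = shm_open("/my_lock", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, sizeof(pthread_mutex_t)) == -1) { perror("ftruncate"); return 1; }

    pthread_mutex_t *m = mmap(NULL, sizeof *m, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    if (m == MAP_FAILED) { perror("mmap"); return 1; }

    /* Exactly ONE process should run this initialisation; coordinating
       that (e.g. with O_EXCL on shm_open) is omitted here. */
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);

    pthread_mutex_lock(m);
    /* ... access the shared data ... */
    pthread_mutex_unlock(m);
    return 0;
}
Compile with -pthread (and -lrt on older glibc for shm_open).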
That said, you can also call sem_unlink on your named semaphore while you are still using it. This will remove it from the file system, but the underlying semaphore object will continue to exist until the last process calls sem_close on it (which happens automatically if the process exits or crashes).
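For example (a sketch; the name "/my_sem" is made up), every participant opens the semaphore and then one process unlinks it once everyone has joined:
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>

int main(void)
{
    /* Every process opens (or creates) the semaphore by name. */
    sem_t *sem = sem_open("/my_sem", O_CREAT, 0600, 1);
    if (sem == SEM_FAILED) { perror("sem_open"); return 1; }

    /* Once all participants have opened it, one process removes the
       name. The semaphore itself survives until the last sem_close
       (implicit on exit or crash), so a dead run leaves nothing
       behind in /dev/shm. */
    sem_unlink("/my_sem");

    sem_wait(sem);
    /* ... critical section ... */
    sem_post(sem);

    sem_close(sem);
    return 0;
}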
I can think of two minor disadvantages to using flock. First, it is not POSIX, so it makes your code somewhat less portable, although I believe most Unixes implement it in practice. Second, it is implemented as a system call, so it will be slower. Both pthread_mutex_lock and sem_wait use the "futex" mechanism on Linux, which only does a system call when you actually have to wait. This is only a concern if you are grabbing and releasing the lock a lot.
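For completeness, a flock-based critical section could look like this sketch (the lock-file path is arbitrary); note that the kernel drops the lock automatically if the process crashes:
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    /* Any path that all processes can open will do as a lock file. */
    int fd = open("/tmp/myapp.lock", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("open"); return 1; }

    if (flock(fd, LOCK_EX) == -1) { perror("flock"); return 1; }
    /* ... critical section; if this process dies here, the kernel
       releases the lock when the descriptor is closed ... */
    flock(fd, LOCK_UN);

    close(fd);
    return 0;
}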

Related

POSIX named semaphore is not released after process exits

I am trying to use a POSIX named semaphore for cross-process synchronization. I noticed that after the process dies or exits, the semaphore is still open in the system.
Is there any way to make it closed/released after the process which opened it dies or exits?
An earlier discussion is here: How do I recover a semaphore when the process that decremented it to zero crashes?. They discussed several possible solutions there.
In short:
No. POSIX semaphores are not released if the owning process crashes or is killed by signals. The waiting process will have to wait forever. You can't work around this as long as you stick with semaphores.
You can use sockets or file locks to implement the inter-process synchronization; these are released automatically when the process exits. The asker of the question linked above eventually chose file locks; see his answer. In the comments he posted a link to his blog, which discusses this issue.
Other links that might help:
Why is sem_wait() not undone when my program crashes?: It also recommends file locks.
Is it possible to use mutex in multiprocessing case on Linux/UNIX?: They discuss using a mutex placed in memory shared between processes for synchronization.
You seem to be having a conceptual problem with inter-process communication. An IPC mechanism's lifetime cannot be tied directly to the life cycle of any one process because then it could disappear out from under other processes accessing it. It is intentional that named semaphores persist until explicitly removed.
The Linux sem_overview(7) manual page, though not an authoritative specification, gives a run-down of semaphore life cycle management:
The sem_open(3) function creates a new named semaphore or opens an existing named semaphore. After the semaphore has been opened, it can be operated on using sem_post(3) and sem_wait(3). When a process has finished using the semaphore, it can use sem_close(3) to close the semaphore. When all processes have finished using the semaphore, it can be removed from the system using sem_unlink(3).
As the documentation for sem_unlink() makes clear, you can unlink a semaphore while processes still have it open. No processes can thereafter sem_open() that semaphore, and ultimately it will be cleaned up when the number of processes that have it open falls to zero. This is intentionally analogous to regular files.
If indeed there is one process that should be responsible for cleaning up a given named semaphore, then you should be sure that it sem_unlink()s it. Two reasonably good alternatives are to unlink it as soon as you are satisfied that all other processes that need it have opened it, or to register an exit handler that handles the unlinking. If viable, the former is probably better.
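The exit-handler variant could be as simple as this sketch (the name "/my_sem" is made up; note that atexit only covers normal termination, which is one reason unlinking early is usually better):
#include <fcntl.h>
#include <semaphore.h>
#include <stdlib.h>

static void cleanup(void)
{
    /* Runs on normal exit only; a crash would still leave the
       name behind. */
    sem_unlink("/my_sem");
}

int main(void)
{
    sem_t *sem = sem_open("/my_sem", O_CREAT, 0600, 1);
    if (sem == SEM_FAILED) return 1;
    atexit(cleanup);
    /* ... use the semaphore ... */
    sem_close(sem);
    return 0;
}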

Semaphores in C Linux Programming

I'm taking over some C code running on Linux (CentOS) that makes extensive use of semaphores.
The way the code is written:
./Program1
This program launches a bunch of processes which make use of mutexes and semaphores.
./Program2
This program also launches a bunch of processes which make use of mutexes and semaphores.
I've realised that Program1 and Program2 make use of semaphores with the same names.
In Linux C programming, can different programs use the same semaphores?
My guess is no, but the same naming is confusing the hell out of me. They are using the same source code to launch and handle the semaphores.
The semaphores are manipulated using the following calls:
semget
semctl
semop
I've read that these are called process semaphores. If Program1 creates SEMAPHORE1, can Program2 access SEMAPHORE1?
Appreciate any help here, thanks!
Assuming you mean named semaphores (created with sem_open) or unnamed semaphores stored in shared memory (created with sem_init), they are generally shared among processes.
Semaphores created with semget and related calls are identified by a key rather than a name, but their usage patterns are similar.
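To illustrate (a sketch only; the key path is made up and the one-time initialisation race is glossed over), two programs that pass the same key to semget get the same semaphore:
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* On Linux, the caller must define union semun itself. */
union semun { int val; struct semid_ds *buf; unsigned short *array; };

int main(void)
{
    /* Both programs derive the same key from the same (existing) path. */
    key_t key = ftok("/tmp/myapp.key", 'S');
    if (key == -1) { perror("ftok"); return 1; }

    int semid = semget(key, 1, IPC_CREAT | 0600);
    if (semid == -1) { perror("semget"); return 1; }

    /* One-time initialisation to 1 (simplified: real code must ensure
       only the creator does this, e.g. via IPC_CREAT | IPC_EXCL). */
    union semun arg = { .val = 1 };
    if (semctl(semid, 0, SETVAL, arg) == -1) { perror("semctl"); return 1; }

    /* SEM_UNDO makes the kernel undo the operation if the process dies. */
    struct sembuf down = { 0, -1, SEM_UNDO };
    struct sembuf up   = { 0, +1, SEM_UNDO };

    semop(semid, &down, 1);
    /* ... critical section ... */
    semop(semid, &up, 1);
    return 0;
}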
Semaphores are one of the IPC (inter-process communication) methods.
You can create a process-private semaphore by using the unnamed variant in non-shared memory; it will only be accessible to threads of the given process, but in my experience that's not a common use case. The semget family of calls can also give you process-private semaphores.
Mutexes, on the other hand, tend to be used within a single process for inter-thread synchronisation, but there is a variant of them that works inter-process.
You create a pthread_mutexattr_t (attribute object) that allows sharing of mutexes, and then use that attribute when initialising the mutex you want to share. Obviously, the mutex itself needs to be in shared memory so that multiple processes can get at it.

semaphores in C

I'm working with semaphores in C, specifically to control access to a shared memory zone in Linux, but there is one thing that I can't understand.
I am using a mutex to control access to a specific zone because I have two processes that must read/write to that zone. The thing is, when we use fork() to create a new child process, the whole program is "copied" as if it were two separate programs, right? So when I do V(mutex) in one process, how does the other one know it can't access the zone?
I know it's a noob question, but nobody has been able to explain this to me until now.
After the fork, neither process is going to know about the memory actions of the other because they have separate copies. You have to put your shared variables in shared memory, including mutexes and semaphores. Then all the processes are operating on the same resources.
For unrelated (i.e. non-forked) processes there are usually system facilities (e.g. named semaphores) that each process can open based on a path name or a similar method, which each can use to find and use the resource.
Your synchronisation objects must be placed in process-shared memory, for example memory created with mmap(... MAP_ANONYMOUS ...). In addition, they must have the PTHREAD_PROCESS_SHARED attribute set, for example by using pthread_mutexattr_setpshared.
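A sketch of that idea with an unnamed semaphore across fork() (the same layout works for a mutex with PTHREAD_PROCESS_SHARED):
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* MAP_SHARED | MAP_ANONYMOUS memory survives fork() as ONE region
       visible to parent and child, unlike ordinary (copied) memory. */
    sem_t *sem = mmap(NULL, sizeof *sem, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (sem == MAP_FAILED) { perror("mmap"); return 1; }

    sem_init(sem, 1 /* pshared: shared between processes */, 1);

    if (fork() == 0) {          /* child */
        sem_wait(sem);
        /* ... child's critical section ... */
        sem_post(sem);
        _exit(0);
    }

    sem_wait(sem);
    /* ... parent's critical section ... */
    sem_post(sem);

    wait(NULL);
    sem_destroy(sem);
    return 0;
}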
See here:
Semaphores and Mutex for Thread and Process Synchronization
So in practice a mutex is often used between threads, which makes sharing trivial. For processes, however, a mutex can be stored as part of the shared memory.
For semaphores, Linux has a built-in facility (the System V semget family) that identifies global semaphores by keys. See below.
http://beej.us/guide/bgipc/output/html/multipage/semaphores.html
Or you can use other IPC mechanisms to synchronise. Signals, for example.
Hope this helps.

pthread_mutex_init vs sem_init (Unshared)

I am looking at changing some code that I would like to run on Linux, Unix, and OSX. There are some calls in the code to sem_init, but the pshared value is set to zero. I did some reading in the Rochkind book on Unix programming, and he basically said that a sem_init that is not shared is the same as pthread_mutex_init because it's acting in an in-memory, binary fashion.
The question is: am I safe to change these sem_init calls to pthread_mutex_init, or should I use sem_open to get a more portable version of this code?
OSX does not support unnamed semaphores, but I guess the other two do. I don't really want to have a separate compile path behind #ifdef __APPLE__ or something either.
Thanks
Mutexes and semaphores have different semantics. A mutex must be unlocked by the same thread that took the lock, so lock/unlock must always come in pairs in the same thread.
A semaphore is much more flexible in that one thread can post a token that another thread consumes. Semaphores are commonly used to implement producer/consumer patterns, for example. So you'd have to check whether the program you want to port fits the more restricted semantics of mutexes.
The semantics of mutexes and semaphores are different. It is true that a non-shared semaphore is equivalent to a mutex if it is only used as a binary semaphore, i.e. if its value is never greater than 1. However, this is something you need to determine from your code's logic, not from how it is initialized. If you are sure that the semaphore is only used as a binary semaphore, then a pthread mutex is a perfect replacement. If not, you can either use sem_open() for portability or write a wrapper that emulates semaphores using pthread mutexes and condition variables (a sketch of such a wrapper follows).
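Such a wrapper might look like this (a counting semaphore built from a pthread mutex and condition variable; error handling omitted, and the mysem_* names are made up):
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    unsigned        count;
} mysem_t;

void mysem_init(mysem_t *s, unsigned value)
{
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
    s->count = value;
}

void mysem_wait(mysem_t *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)            /* loop guards against spurious wakeups */
        pthread_cond_wait(&s->cond, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void mysem_post(mysem_t *s)
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->cond);
    pthread_mutex_unlock(&s->lock);
}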
Switching to mutexes should be safe in the given instance. If only one thread can enter the given critical section at a time, you effectively have a mutex whether it's written as a semaphore or not. However, depending on how the functions are implemented by the OS, you may get different performance characteristics. It's not something I would lose sleep over, but still something to keep in the back of your mind while testing.
I prefer to use a mutex and condition variable.
That is because in my past work I encountered problems caused by incorrect use of semaphores, and those problems were extremely difficult to track down. It is hard to use sem_init and sem_post in an absolutely correct way.
Like:
// Thread A
sem_init(&sem, 0, 0);   /* pshared = 0, initial value 0 */
// Thread B
sem_wait(&sem);
// Kernel: Linux 3.10
If Thread A starts before Thread B, Thread B may block on sem_wait forever.
It is hard to make assumptions about the start sequence of multiple threads, and Thread A may be restarted when it crashes.
But if you call pthread_mutex_init repeatedly on an already-initialised mutex, the function may return EBUSY:
https://pubs.opengroup.org/onlinepubs/007908799/xsh/pthread_mutex_init.html

POSIX API call to list all the pthreads running in a process

I have a multi-threaded application in a POSIX/Linux environment, and I have no control over the code that creates the pthreads. At some point the process, the owner of the pthreads, receives a signal.
The handler of that signal should abort, cancel, or stop all the pthreads and log how many pthreads were running.
My problem is that I could not find out how to list all the pthreads running in a process.
There doesn't seem to be any portable way to enumerate the threads in a process.
Linux has pthread_kill_other_threads_np, which looks like a leftover from the original purely-userland pthreads implementation and may or may not work as documented today. It doesn't tell you how many threads there were.
You can get a lot of information about your process by looking in /proc/self (or, for other processes, /proc/123). Although many Unices have a file or directory with that name, the layout is completely different, so any code using /proc will be Linux-specific. The documentation of /proc is in Documentation/filesystems/proc.txt in the kernel source. In particular, /proc/self/task has a subdirectory for each thread. The name of the subdirectory is the LWP id; unfortunately, there doesn't seem to be a way to associate LWP ids with pthread ids (but you can get your own thread id with gettid(2) with a bit of work). Of course, reading /proc/self/task is not atomic; the number of threads is available atomically through /proc/self/status (but of course it might change before you act on it).
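Counting the entries of /proc/self/task, for instance, might look like this sketch (Linux-specific, and only a snapshot):
#include <dirent.h>
#include <stdio.h>

int main(void)
{
    /* Linux-specific: each thread appears as a subdirectory of
       /proc/self/task, named by its LWP (kernel thread) id. */
    DIR *d = opendir("/proc/self/task");
    if (!d) { perror("opendir"); return 1; }

    int nthreads = 0;
    struct dirent *e;
    while ((e = readdir(d)) != NULL)
        if (e->d_name[0] != '.')    /* skip "." and ".." */
            nthreads++;
    closedir(d);

    /* Snapshot only: threads may start or exit while we read. */
    printf("threads: %d\n", nthreads);
    return 0;
}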
If you can't achieve what you want with the limited support you get from Linux pthreads, another tactic is to play dynamic linking tricks to provide your own version of pthread_create that logs to a data structure you can inspect afterwards.
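One way to do that is an LD_PRELOAD interposer; the following sketch (the file name trace.c is made up) merely logs each creation, but it could just as well record the thread ids in a list:
/* Build as a shared object and load with LD_PRELOAD, e.g.
     gcc -shared -fPIC -o libtrace.so trace.c -ldl -pthread
     LD_PRELOAD=./libtrace.so ./your_program                 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <pthread.h>
#include <stdio.h>

typedef int (*create_fn)(pthread_t *, const pthread_attr_t *,
                         void *(*)(void *), void *);
static create_fn real_create;

int pthread_create(pthread_t *t, const pthread_attr_t *a,
                   void *(*fn)(void *), void *arg)
{
    /* Look up the real pthread_create the first time through. */
    if (!real_create)
        real_create = (create_fn)dlsym(RTLD_NEXT, "pthread_create");

    int rc = real_create(t, a, fn, arg);
    /* Here you could also record *t in a mutex-protected list. */
    fprintf(stderr, "pthread_create -> %d\n", rc);
    return rc;
}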
You could wrap ps -eLF (or another command that reads just the process you're interested in) and read the NLWP column to find out how many threads are running.
Given that the threads are in your process, they should be under your control. You can record all of them in a data structure and keep track.
However, doing this won't be free of race conditions unless it's appropriately managed (or you only ever create and join threads from one thread).
Any threads created by libraries you use are their business, and you should not be messing with them directly, or the library may break.
If you are planning to exit the process of course, you can just leave the threads running anyway, as calling exit() terminates them all.
Remember that a robust application should be crash-safe anyway, so you should not depend upon shutdown behaviour to avoid data loss etc.
