Is there any way to clean up POSIX shared synchronization objects, especially on process crash? Unblocking locked POSIX semaphores is the most desired behaviour, but automatically 'collected' message queues and shared memory regions would be nice too. Another thing to keep in mind is that we cannot rely on signal handlers in general, because SIGKILL cannot be caught.
I see only one alternative: an external daemon that accepts subscriptions and 'keep-alive' requests and acts as a watchdog, so that when it stops receiving notifications about some object it can close or unlock that object according to a registered policy.
Does anyone have a better alternative or proposal? I have never worked seriously with POSIX shared objects before (sockets were enough for all my needs and, in my opinion, are much more useful), and I could not find any applicable article. I would gladly use sockets here, but I can't, for historical reasons.
Rather than using semaphores you could use file locking to coordinate your processes. The big advantage of file locks is that they are released if the process terminates. You can map each semaphore onto a lock on one byte of a shared file and know that the locks will get released on exit; in most versions of Unix the bytes you lock don't even have to exist. There is code for this in Marc Rochkind's book Advanced Unix Programming, 1st edition; I don't know whether it is in the latest 2nd edition.
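A minimal sketch of that byte-per-semaphore idea with fcntl() record locks (the file name and byte index below are arbitrary examples); the kernel drops these locks when the process terminates for any reason, including SIGKILL:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Acquire an exclusive lock on byte 'index' of the lock file. */
    static int lock_byte(int fd, off_t index)
    {
        struct flock fl = {
            .l_type   = F_WRLCK,
            .l_whence = SEEK_SET,
            .l_start  = index,
            .l_len    = 1,
        };
        return fcntl(fd, F_SETLKW, &fl);   /* blocks until the byte is free */
    }

    static int unlock_byte(int fd, off_t index)
    {
        struct flock fl = {
            .l_type   = F_UNLCK,
            .l_whence = SEEK_SET,
            .l_start  = index,
            .l_len    = 1,
        };
        return fcntl(fd, F_SETLK, &fl);
    }

    int main(void)
    {
        /* "/tmp/myapp.locks" is just an example path */
        int fd = open("/tmp/myapp.locks", O_RDWR | O_CREAT, 0666);
        if (fd < 0) { perror("open"); return 1; }

        if (lock_byte(fd, 0) < 0) { perror("lock"); return 1; }
        /* ... critical section for "semaphore" 0 ... */
        unlock_byte(fd, 0);
        return 0;
    }

One caveat: classic fcntl() locks belong to the process rather than to individual threads, so they replace the inter-process part of a semaphore but not intra-process locking.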
I know this question is old, but another great solution is POSIX robust mutexes. When the owner dies, the mutex is automatically released and flagged: the next thread that attempts to lock it gets an EOWNERDEAD error but still succeeds in becoming the new owner. That thread can then clean up whatever state the mutex was protecting (which could be in a very bad, inconsistent state due to the asynchronous termination of the previous owner!) and mark the mutex as consistent again before unlocking it.
See the documentation on robust mutexes here:
http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_mutex_lock.html
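For illustration, a hedged sketch of how such a mutex might be set up and recovered (compile with -pthread; the function names are mine, and the mutex would normally live in a shared-memory mapping so several processes can reach it):

    #include <errno.h>
    #include <pthread.h>

    /* Initialize a robust, process-shared mutex. */
    int init_robust_mutex(pthread_mutex_t *m)
    {
        pthread_mutexattr_t attr;
        int rc;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
        rc = pthread_mutex_init(m, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }

    /* Lock the mutex, recovering if the previous owner died. */
    int lock_with_recovery(pthread_mutex_t *m)
    {
        int rc = pthread_mutex_lock(m);
        if (rc == EOWNERDEAD) {
            /* We now own the mutex, but the protected data may be
             * half-updated: repair or reset it here, then mark the
             * mutex usable again. */
            pthread_mutex_consistent(m);
            rc = 0;
        }
        return rc;
    }

Usage is the same as a plain pthread_mutex_lock() call, just with lock_with_recovery() wherever the shared state is entered.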
The usual way is to work with signal handlers. Just catch the signals and call the cleanup functions.
But your watchdog daemon has some merits, too. It would surely make the system simpler to understand and manage. To keep administration simple, your application should start the daemon when it is not already running, and the daemon should be able to clean up any residue from the last crash.
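As a small, hedged illustration of the signal-handler route (the semaphore name is an example, and only catchable signals are covered, so SIGKILL still defeats it):

    #include <fcntl.h>        /* O_CREAT */
    #include <semaphore.h>
    #include <signal.h>
    #include <stdlib.h>

    static sem_t *g_sem = SEM_FAILED;   /* example: a named semaphore we hold */

    /* Best-effort cleanup; strictly, only sem_post() is async-signal-safe,
     * so keep the handler work to a minimum. */
    static void cleanup(void)
    {
        if (g_sem != SEM_FAILED)
            sem_post(g_sem);
    }

    static void on_fatal_signal(int sig)
    {
        cleanup();
        signal(sig, SIG_DFL);   /* restore default action and re-raise */
        raise(sig);
    }

    int main(void)
    {
        atexit(cleanup);                  /* normal exit path */
        signal(SIGTERM, on_fatal_signal); /* catchable terminations only; */
        signal(SIGINT,  on_fatal_signal); /* SIGKILL can never be handled */

        g_sem = sem_open("/myapp_sem", O_CREAT, 0666, 1);  /* example name */
        if (g_sem == SEM_FAILED)
            return 1;

        sem_wait(g_sem);
        /* ... work while holding the semaphore ... */
        return 0;   /* atexit() runs cleanup() and releases it */
    }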
I need to create named locks that work correctly in a multi-threaded application on Linux. Each instance of the application may use more than one named lock, with different names.
I know about fcntl/flock, but it doesn't work if I try to lock twice from different threads of one application, or twice from one thread.
I know about open(..., O_CREAT | O_EXCL), but such a lock file is not removed if the application is killed with SIGKILL or crashes with a segmentation fault, so the lock files have to be removed manually before restarting the application.
Are there any other approaches?
If you just need to run under modern Linux, you could use file-private locks. If that's not an option, you'll have to build your own thread-safe locking abstraction on top of fcntl locks. SQLite is public domain and has implemented that, so you could look at that for inspiration. If GPLed code is okay: OpenJDK has another, incompatible implementation of the same thing.
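On current Linux these went in as "open file description" (OFD) locks; a rough sketch under that assumption (Linux 3.15+, glibc 2.20+), with each named lock mapped to its own file:

    #define _GNU_SOURCE   /* F_OFD_SETLKW */
    #include <fcntl.h>
    #include <unistd.h>

    /* Acquire a named lock.  Each acquisition opens its own file
     * description, and OFD locks conflict between descriptions even
     * within one process, so a second acquire from another thread (or
     * the same thread) blocks as expected -- unlike classic per-process
     * fcntl() locks.  The kernel releases the lock automatically when
     * the description is closed, including on crash or SIGKILL. */
    int acquire_named_lock(const char *path)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0666);
        if (fd < 0)
            return -1;

        struct flock fl = {
            .l_type   = F_WRLCK,
            .l_whence = SEEK_SET,
            .l_start  = 0,
            .l_len    = 0,          /* 0 = lock the whole file */
        };
        if (fcntl(fd, F_OFD_SETLKW, &fl) < 0) {   /* blocking acquire */
            close(fd);
            return -1;
        }
        return fd;   /* keep the fd; closing it releases the lock */
    }

    void release_named_lock(int fd)
    {
        close(fd);   /* releases the OFD lock */
    }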
O_EXCL does not perform locking (beyond the file creation step), so that's usually not helpful.
Other options are System V and POSIX semaphores, but these usually do not work as well as fcntl locks when processes die. A robust, process-shared mutex in a file mapping could be an option as well, but you need to be careful to stay within POSIX semantics as far as serialization to disk is concerned (basically, you need to reinitialize the mutex every time the application starts from scratch, after a reboot or a libc update).
As the title suggests, is there a way in C to detect when a user-level thread running on top of a kernel-level thread (e.g., a pthread) has blocked (or is about to block) for I/O?
My use case is as follows: I need to execute tasks in a multithreaded environment (on top of kernel threads, e.g., pthreads). The tasks are basically user functions that may synchronize with each other and may use blocking operations internally. I need to hide latency in my implementation, so I am exploring the idea of implementing the tasks as user-level threads for better control of their execution context: when a task blocks or synchronizes, I context-switch to other ready tasks (i.e., I implement my own scheduler for the user-level threads). This way, nearly the full OS time quantum of each kernel thread can be put to use.
There used to be code that did this, for example GNU pth. It's generally been abandoned because it just doesn't work very well and we have much better options now. You have two choices:
1) If you have OS help, you can use the OS mechanisms. Windows provides OS help for this; IOCP dispatching uses it.
2) If you have no OS help, then you have to convert all blocking operations into non-blocking ones that call your dispatcher rather than blocking. So, for example, if someone calls socket, you intercept that call and set the socket non-blocking. When they call read, you intercept that call too, and if it returns a "would block" indication, you arrange to resume when the operation might succeed and schedule another thread (a rough sketch follows after this answer).
You can look at GNU pth to see how you might make option 2 work. But be warned, GNU pth is full of reported bugs that have never been fixed since it was abandoned. It will give you an idea of how to implement things like mutexes and sleeps in a cooperative user-space threading environment. But don't actually use the code.
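To make option 2 concrete, here is a rough sketch of a wrapped read(); sched_wait_readable() and the scheduler behind it are hypothetical pieces you would implement yourself (for example with ucontext switching plus an epoll loop):

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Hypothetical scheduler hook: register interest in 'fd' becoming
     * readable, switch to another ready user-level thread, and return
     * here once the fd is reported ready. */
    extern void sched_wait_readable(int fd);

    /* Make the descriptor non-blocking once, when it is created. */
    int make_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        return flags < 0 ? -1 : fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

    /* Replacement for read(): never blocks the kernel thread; instead it
     * parks the current user-level thread until data is available. */
    ssize_t my_read(int fd, void *buf, size_t len)
    {
        for (;;) {
            ssize_t n = read(fd, buf, len);
            if (n >= 0)
                return n;
            if (errno != EAGAIN && errno != EWOULDBLOCK)
                return -1;                 /* real error */
            sched_wait_readable(fd);       /* run other tasks meanwhile */
        }
    }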
Let's consider a case where a user program calls a system call that has some synchronization measures. The simplest example would be
    DEFINE_RWLOCK(lock);    /* statically initialized kernel rwlock */

    write_lock(&lock);
    /* do something... */
    write_unlock(&lock);
Now, what happens when the user program terminates after locking lock but before releasing it is that the lock stays locked forever, which we do not want to happen. What we want is for the kernel to detect such dangling locks and release them accordingly. But detecting them can incur too much overhead, as the system would need to periodically record and check every task for every synchronizing action.
Or perhaps we could centralize the code in another kernel thread and do the synchronization work there. But handing work off to another thread still requires some form of synchronization, so I don't think it is possible to completely remove the synchronizing code from the user program's path.
I have put a lot of thought into this and tried to google for information, but I couldn't find anything that sheds light on it. Any help would be very much appreciated. Thank you.
When writing a non-blocking program (handling multiple sockets) which at a certain point needs to open files using open(2), stat(2) files or open directories using opendir(2), how can I ensure that the system calls do not block?
To me it seems that there is no alternative other than using threads or fork(2).
As Mel Nicholson replied, for everything file-descriptor based you can use select/poll/epoll. For everything else you can have a proxy thread per item (or a thread pool) with a small stack that converts (by means of the kernel scheduler) any synchronous blocking wait into a select/poll/epoll-able asynchronous event, using eventfd or a Unix pipe (where portability is required).
The proxy thread blocks until the operation completes and then writes to the eventfd or to the pipe to wake up the select/poll/epoll loop.
Indeed there is no other method.
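As an illustrative sketch of the proxy-thread idea (Linux-specific eventfd, stat() standing in for any blocking call, compile with -pthread):

    #include <poll.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/eventfd.h>
    #include <sys/stat.h>
    #include <unistd.h>

    struct stat_req {
        const char *path;
        struct stat st;
        int         efd;    /* eventfd used to signal completion */
    };

    /* Proxy thread: perform the blocking stat() and wake the event loop. */
    static void *stat_proxy(void *arg)
    {
        struct stat_req *req = arg;
        uint64_t one = 1;

        stat(req->path, &req->st);          /* may block on disk I/O */
        write(req->efd, &one, sizeof one);  /* makes the eventfd readable */
        return NULL;
    }

    int main(void)
    {
        struct stat_req req = { .path = "/etc/hostname",
                                .efd  = eventfd(0, 0) };
        pthread_t t;
        uint64_t val;

        pthread_create(&t, NULL, stat_proxy, &req);

        /* The main loop keeps servicing sockets; here the eventfd is the
         * only descriptor, but normally it sits next to the sockets. */
        struct pollfd pfd = { .fd = req.efd, .events = POLLIN };
        poll(&pfd, 1, -1);

        read(req.efd, &val, sizeof val);    /* drain the eventfd */
        printf("%s is %lld bytes\n", req.path, (long long)req.st.st_size);

        pthread_join(t, NULL);
        close(req.efd);
        return 0;
    }

In a real server the eventfd would simply be one more descriptor in the existing select/poll/epoll set, next to the sockets.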
Actually there is another kind of blocking that can't be dealt with other than by threads, and that is page faults. These may happen in program code, program data, memory allocation, or data mapped from files. It is almost impossible to avoid them (you can lock some pages into memory with mlock(), but that is subject to limits and privileges, and would probably backfire by making the kernel do a poorer job of memory management elsewhere). So:
You can't really weed out every last chance of blocking for a particular client, so don't bother avoiding the likes of open and stat. The network will probably add larger delays than these functions anyway.
For optimal performance you should have enough threads so some can be scheduled if the others are blocked on page fault or similar difficult blocking point.
Also, if you need to read and process, or process and write, data while handling a network request, it is faster to access the file using memory mapping, but that is blocking and can't be made non-blocking. So modern network servers tend to stick with blocking calls for most things and simply run enough threads to keep the CPU busy while other threads are waiting for I/O.
The fact that most modern servers are multi-core is another reason why you need multiple threads anyway.
You can use the poll() system call to check any number of sockets for data from a single thread.
See the Linux poll(2) man page, or man poll, for the details on your system.
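A minimal sketch of such a loop, assuming sock1 and sock2 are already-connected descriptors you obtained elsewhere:

    #include <poll.h>
    #include <unistd.h>

    /* Wait on two sockets at once from a single thread. */
    void serve(int sock1, int sock2)
    {
        struct pollfd fds[2] = {
            { .fd = sock1, .events = POLLIN },
            { .fd = sock2, .events = POLLIN },
        };

        for (;;) {
            int ready = poll(fds, 2, 1000);   /* 1-second timeout */
            if (ready <= 0)
                continue;                     /* timeout or error */

            for (int i = 0; i < 2; i++) {
                if (fds[i].revents & POLLIN) {
                    char buf[4096];
                    ssize_t n = read(fds[i].fd, buf, sizeof buf);
                    if (n > 0) {
                        /* handle the data without blocking the loop */
                    }
                }
            }
        }
    }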
open() and stat() will block the thread they are called from on all POSIX-compliant systems, unless called via an asynchronous tactic (such as from a forked child process).
I'm trying to port code that relies on SIGSTOP/SIGCONT to stop and resume certain threads of an application. With the current Linux NPTL implementation it seems one can't rely on that in 2.6.x kernels. I'm trying to devise another method to stop threads. Currently I can only think of mutexes and condition variables. Any hints are appreciated.
If you are relying on stopping and resuming other threads, then your application will eventually fail.
That is because you cannot guarantee that you won't stop a thread while it holds a mutex protecting a shared resource. That would result in deadlock, as any other threads (possibly including the one that stopped the first thread) that later need that mutex will wait for it forever.
I'm sure it is possible, but also, you're doing it wrong.
NB: such mutexes probably exist inside the C library even if you have none in your own code; and if your code is nontrivial and really has none of its own, I'd be surprised.
How are you sending the signals to the target thread? If you use pthread_kill() to send SIGSTOP / SIGCONT to a single thread, it should work.
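If cooperative stopping is acceptable, the mutex/condition-variable idea from the question can be sketched roughly like this (the worker checks a pause flag only at safe points of its own choosing, so it is never suspended while holding another lock):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t pause_mtx  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  pause_cond = PTHREAD_COND_INITIALIZER;
    static bool            paused     = false;

    /* Called by the controlling thread to stop or resume the workers. */
    void set_paused(bool p)
    {
        pthread_mutex_lock(&pause_mtx);
        paused = p;
        pthread_cond_broadcast(&pause_cond);
        pthread_mutex_unlock(&pause_mtx);
    }

    /* Called by worker threads at points where it is safe to stop
     * (i.e., while holding no other locks). */
    void pause_point(void)
    {
        pthread_mutex_lock(&pause_mtx);
        while (paused)
            pthread_cond_wait(&pause_cond, &pause_mtx);
        pthread_mutex_unlock(&pause_mtx);
    }

Because a worker only stops at its own pause points, it is never suspended while holding another mutex, which avoids the deadlock described above.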