Correctly processing Ctrl-C when using poll() - c

I am making a program that runs like a server, so it is constantly running poll(). I need to process both Ctrl-C and Ctrl-D. While Ctrl-D is pretty easy to handle when using poll (you just also poll for POLLIN on stdin), I cannot come up with a pretty solution for signals. Do I need to create a dummy file to which my signal handler will write something when it's time to exit, or would pipes fit this purpose nicely?

As commented by Dietrich Epp, the usual way of handling this is the "pipe to self" trick. First, at initialization time, you set up a pipe(7): you call pipe(2) and keep both the read and write file descriptors of that pipe in some (e.g. global) data. Your signal handler just write(2)s a few bytes (perhaps a single 0 byte) to the write-end fd. Your event loop around poll(2) (or the older select(2), etc.) reacts by read(2)-ing those bytes when the read-end file descriptor has some data.
This pipe to self trick is common and portable to all POSIX systems, and recommended e.g. by Qt.
The signalfd(2) system call is Linux-specific (e.g. you don't have it on macOS). Some old Linux kernels might not have it.
Be aware that the set of functions usable inside a signal handler is limited to async-signal-safe functions - so you are allowed to use write(2) but forbidden to use fprintf or malloc inside a signal handler. Read carefully signal(7) and signal-safety(7).
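As a minimal sketch of the self-pipe trick described above (the fd names, messages, and error handling are placeholders, not part of the original answer):

#include <errno.h>
#include <poll.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int self_pipe[2];   /* [0] = read end, [1] = write end */

static void on_sigint(int signo)
{
    unsigned char b = 0;
    (void)signo;
    (void)write(self_pipe[1], &b, 1);   /* write(2) is async-signal-safe */
}

int main(void)
{
    struct sigaction sa;
    struct pollfd fds[2];

    if (pipe(self_pipe) == -1) { perror("pipe"); return 1; }

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);

    fds[0].fd = STDIN_FILENO;   fds[0].events = POLLIN;   /* Ctrl-D shows up as EOF here */
    fds[1].fd = self_pipe[0];   fds[1].events = POLLIN;   /* Ctrl-C shows up here */

    for (;;) {
        if (poll(fds, 2, -1) == -1) {
            if (errno == EINTR) continue;   /* poll was interrupted by the signal itself */
            perror("poll"); return 1;
        }
        if (fds[1].revents & POLLIN) {
            unsigned char b;
            (void)read(self_pipe[0], &b, 1);
            puts("got SIGINT, shutting down");
            break;
        }
        if (fds[0].revents & POLLIN) {
            char buf[256];
            ssize_t r = read(STDIN_FILENO, buf, sizeof buf);
            if (r == 0) { puts("got EOF (Ctrl-D), shutting down"); break; }
            /* ... handle the input in buf ... */
        }
    }
    return 0;
}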

signalfd is what you are after - connect it to SIGINT and you can poll for Ctrl-C – see the example in the link provided (quite far down the page – actually, they are catching Ctrl-C there...).
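For reference, a minimal signalfd(2) sketch (Linux-only; note that SIGINT must be blocked first so it is delivered through the fd rather than to a default handler):

#include <poll.h>
#include <signal.h>
#include <stdio.h>
#include <sys/signalfd.h>
#include <unistd.h>

int main(void)
{
    sigset_t mask;
    int sfd;

    sigemptyset(&mask);
    sigaddset(&mask, SIGINT);
    /* block SIGINT so it is delivered through the fd, not a handler */
    if (sigprocmask(SIG_BLOCK, &mask, NULL) == -1) { perror("sigprocmask"); return 1; }

    sfd = signalfd(-1, &mask, 0);
    if (sfd == -1) { perror("signalfd"); return 1; }

    for (;;) {
        struct pollfd pfd = { sfd, POLLIN, 0 };
        struct signalfd_siginfo si;

        if (poll(&pfd, 1, -1) == -1) { perror("poll"); return 1; }
        if (pfd.revents & POLLIN) {
            ssize_t n = read(sfd, &si, sizeof si);
            if (n == (ssize_t)sizeof si && si.ssi_signo == SIGINT) {
                puts("Ctrl-C caught via signalfd");
                break;
            }
        }
    }
    close(sfd);
    return 0;
}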

Related

Equivalent of select or poll for pipes on Windows

Some Unix code I am working on depends on being able to poll over a small number of pipes. poll is a POSIX system call that (much like the older select) allows the process to wait until one or more file descriptors are "ready" for reading or writing, meaning one can proceed to do so without blocking. This is useful for implementing event loops where waiting is clearly separated from the rest of the communication.
Is it possible to do the same for Windows pipe handles - wait for one or more of them to become "ready" for reading/writing?
Existing SO advice on the matter, such as answers to this question, recommend the use of completion ports. However as far as I can tell, completion ports require initiating reading/writing beforehand, and then waiting for (or being notified of) the completion of those operations. This approach does not fit the architecture of the code, which strongly separates the polling code from the reading/writing code, the latter calling into a library that uses the regular ReadFile and WriteFile on the underlying handle.
If there is no direct equivalent to poll, could one abuse completion ports to provide something similar? In other words, is it possible to create IO completion events that announce "you can now call ReadFile (WriteFile) on this handle without it blocking" and wait for them using WaitForMultipleObjects or GetQueuedCompletionStatus?

Forcefully remove fcntl locks from a different process

Is there any way I can remove fcntl byte range locks on a file from a process that did not lock these ranges?
I have several processes that put byte range locks on files. What I basically need to come up with is an external tool that would help me remove byte range locks for files I specify.
There are two options that immediately come to mind.
Write a kernel module to do this.
As far as I know, there is no kernel facility to do this as of right now.
(You could add a new command to fcntl(), that given superuser privileges or same user as the owner of the lock, does the force-unlock or lock stealing.)
Write a small library that installs a handler for a realtime signal, say SIGRTMAX. When this signal is caught, having been sent via sigqueue(), and its int payload names an open file descriptor, release all byte-range locks on that descriptor.
Alternatively, you can have the signal handler open and read a file or pipe (say /tmp/PID.lock), where the file or pipe contains a data packet defining which file or file descriptor and byte range to unlock.
As long as the library is loaded when the process starts (and possibly interposing all signal() and sigaction() calls to make sure your signal is kept in the call chain), this should work fine.
The second option requires that you preload the library (via LD_PRELOAD environment variable, or preloading it for all binaries using /etc/ld.so.conf).
The interposing library is not difficult at all to write. I have shown an example of using an interposing library to monitor fork() calls. In your case, you'd have to think of a good way to define the byte ranges to be unlocked (in a file or pipe, triggered by a signal), and handle all that in the signal handler context; but there are enough async-signal-safe low-level unistd.h I/O functions to do this.
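A rough sketch of the signal-handler side of the second option, assuming the sigqueue() int payload carries the file descriptor to unlock (the names here are illustrative, not from an existing library):

#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical handler for SIGRTMAX: the sigqueue() int payload names the
   open file descriptor whose byte-range locks should be dropped. */
static void unlock_handler(int signo, siginfo_t *info, void *ctx)
{
    struct flock fl = { 0 };
    int fd = info->si_value.sival_int;
    (void)signo; (void)ctx;

    fl.l_type   = F_UNLCK;
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;
    fl.l_len    = 0;               /* length 0 means "to end of file", i.e. every range */

    (void)fcntl(fd, F_SETLK, &fl); /* fcntl() is async-signal-safe */
}

static void install_unlock_handler(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = unlock_handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGRTMAX, &sa, NULL);
}

The external tool would then send the request with something like sigqueue(pid, SIGRTMAX, (union sigval){ .sival_int = fd });, with the caveat that it has to know which descriptor number the target process uses.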

Reading shared data inside a signal handler

I am in a situation where I need to read a binary search tree (BST) inside a signal handler (a SIGSEGV handler, which to my knowledge is delivered on a per-thread basis). The BST can be modified by the other threads in the application.
Now, since a signal handler can't use semaphores, mutexes, etc. and therefore can't access shared data, how do I solve this problem? Note that my application is multithreaded and running on a multicore system.
You shouldn't access shared data from a signal handler. You can find more information about signals in the following articles:
Linux Signals for the Application Programmer
The Linux Signals Handling Model
All about Linux signals
It looks like the safest way to deal with signals on Linux so far is signalfd.
I can see two quite clean solutions:
Linux-specific: Create a dedicated thread handling signals. Catch signals using signalfd(). This way you will handle signals in a regular thread, not in a restricted handler context.
Portable: Also use a dedicated thread that sleeps until a signal is received. You may use a pipe to create a pair of file descriptors. The thread may read(2) from the first descriptor, and in a signal handler you may write(2) to the second descriptor. Using write() in a signal handler is legal according to POSIX. When the thread reads something from the pipe, it knows it must perform some action.
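A minimal sketch of the portable variant just described, using SIGUSR1 as a stand-in signal; because the reader is an ordinary thread rather than a signal handler, it may take the same mutex the other threads use when modifying the BST:

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static int sig_pipe[2];            /* handler writes to [1], worker thread reads from [0] */
static pthread_mutex_t bst_lock = PTHREAD_MUTEX_INITIALIZER;

static void handler(int signo)
{
    unsigned char b = (unsigned char)signo;
    (void)write(sig_pipe[1], &b, 1);        /* write(2) is async-signal-safe */
}

/* An ordinary thread, not a signal handler, so taking a mutex here is fine */
static void *signal_thread(void *arg)
{
    unsigned char b;
    (void)arg;
    while (read(sig_pipe[0], &b, 1) == 1) {
        pthread_mutex_lock(&bst_lock);
        /* ... read or walk the shared BST safely here ... */
        pthread_mutex_unlock(&bst_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;

    if (pipe(sig_pipe) == -1) { perror("pipe"); return 1; }
    signal(SIGUSR1, handler);               /* SIGUSR1 used here purely as an example */
    pthread_create(&t, NULL, signal_thread, NULL);

    /* ... the rest of the application; threads that modify the BST also
       take bst_lock around their updates ... */
    pthread_join(t, NULL);
    return 0;
}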
Assuming the SH can't access the shared data directly, then maybe you could do it indirectly:
Have some global variable that only signal handlers can write to, but can be read from elsewhere (even if only within the same thread).
SH sets the flag when it is invoked
Threads poll this flag when they are not in the middle of modifying the BST; when they find it set, they do the processing required by the original signal (using whatever synchronization is necessary), and then raise a different signal (like SIGUSR1) to indicate that the processing is done
The SH for THAT signal resets the flag
If you're worried about overlapping SIGSEGVs, add a counter to the mix to keep track. (Hey! You just built your own semaphore!)
The weak link here is obviously the polling, but it's a start.
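A bare-bones sketch of that flag scheme (it glosses over the fact that returning from a genuine SIGSEGV handler normally re-executes the faulting instruction):

#include <signal.h>

/* Written only by signal handlers, polled by worker threads */
static volatile sig_atomic_t segv_pending = 0;

static void on_segv(int signo) { (void)signo; segv_pending = 1; }
static void on_done(int signo) { (void)signo; segv_pending = 0; }

void install_flag_handlers(void)
{
    signal(SIGSEGV, on_segv);
    signal(SIGUSR1, on_done);   /* raised once a thread has finished the processing */
}

/* Called by worker threads at points where the BST is in a consistent state */
void poll_segv_flag(void)
{
    if (segv_pending) {
        /* ... take whatever locks are needed, read the BST, act on the fault ... */
        raise(SIGUSR1);         /* lets the second handler clear the flag */
    }
}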
You might consider mmap-ing a FUSE file system (in user space).
Actually, you'd be happier on GNU Hurd, which has support for external pagers.
And perhaps your hack of reading a binary search tree in your signal handler could often work in practice, non-portably and in a kernel-version-dependent way. Perhaps serializing access with low-level non-portable tricks (e.g. futexes and atomic GCC builtins) might work. Reading the (machine-specific) source code of NPTL, i.e. the current Linux pthread routines, should help.
It could probably be the case that pthread_mutex_lock etc. are in fact usable from inside a Linux signal handler... (because it probably uses only futexes and atomic instructions).

Triggering Signal Handler For I/O

Using C on Linux, how would I go about triggering a signal handler every time I write data to a buffer using the write() function? The handler will read all data written to the buffer at the time of execution.
Sockets support this by enabling async mode on the socket file descriptor. On Linux this is done using fcntl calls:
/* set socket owner (the process that will receive signals) */
fcntl(fd, F_SETOWN, getpid());
/* optional if you want to receive a real-time signal instead of SIGIO */
fcntl(fd, F_SETSIG, signum);
/* turn on async mode -- this is the important part which enables signal delivery */
fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_ASYNC);
Use pipe() with O_ASYNC and you'll receive a SIGIO on the read end of the pipe whenever there's new data on the pipe.
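A small sketch of that pipe-plus-O_ASYNC setup (the SIGIO handler here just drains the pipe; error handling is omitted):

#include <fcntl.h>
#include <signal.h>
#include <unistd.h>

static int pfd[2];                          /* pfd[0] = read end, pfd[1] = write end */

static void on_sigio(int signo)
{
    char buf[256];
    (void)signo;
    (void)read(pfd[0], buf, sizeof buf);    /* read(2) is async-signal-safe */
    /* ... consume the bytes that were just written ... */
}

int main(void)
{
    if (pipe(pfd) == -1) return 1;
    signal(SIGIO, on_sigio);

    /* deliver SIGIO to this process whenever the read end becomes readable */
    fcntl(pfd[0], F_SETOWN, getpid());
    fcntl(pfd[0], F_SETFL, fcntl(pfd[0], F_GETFL, 0) | O_ASYNC);

    (void)write(pfd[1], "hello", 5);        /* triggers SIGIO on the read end */
    sleep(1);                               /* a real program would keep doing other work */
    return 0;
}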
I don't 100% understand what you are trying to do, BUT
select might be what you need: it waits for data to be written to a file/pipe. You can use it to do/simulate asynchronous I/O.
If the file descriptor being used with write() is not for a FIFO or pipe (as suggested by Ken Bloom) or an asynchronous socket (as suggested by mark4o), and does not otherwise cause a signal (i.e. SIGIO) to be raised, I suppose you could use raise() to send a signal to the current process after writing data to the buffer. Depending on what you are actually trying to achieve, this may not be the best solution.
Update
If I understand you correctly, you want to write to a file, have a SIGIO signal generated on completion of the write, and then read the data back from within the signal handler. It seems you want to use asynchronous I/O for a file.
In Asynchronous I/O on linux or: Welcome to hell, the author describes various asynchronous I/O techniques on Linux, including using the SIGIO signal. The SIGIO signal technique cannot be used with regular files.
Even though the author of the previously mentioned article doesn't think highly of the POSIX AIO API provided in the 2.6 kernel, you may want to look into it anyway, as it can be used to provide notification of asynchronous read/write completion to a regular file through signals and function callbacks.
In Boost application performance using asynchronous I/O, the author provides an overview of basic Linux I/O models before introducing the AIO API.

Strategy flushing file outputs at termination

I have an application that monitors a high-speed communication link and writes logs to a file (via standard C file IO). The response time to messages that arrive on the link is important, so I knowingly don't fflush the file at each message, because this slows down my response time.
However, in some circumstances my application is terminated "violently" (e.g. by killing the process), and in these cases the last few log messages are not written (even if the communication link has been quiet for some time).
What techniques/strategies can I use to make sure most of my data is flushed, but without giving up speed of response?
Edit: The application runs on Windows
Using a thread is the standard solution to this. Have your data collection code write data to a thread-safe queue and use a semaphore to signal the writing thread.
However, before you go there, double-check your assertion that fflush() would be slow. Most operating systems have a file system cache. It makes writes very fast, as a simple memory-to-memory block copy. The data gets written to disk lazily, so your crash won't affect it.
If you are on Unix or Linux, your process would receive some termination signal which you can catch (except SIGKILL) and fflush() in your signal handler.
For signal catching see man sigaction.
EDIT: No idea about Windows.
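On the Unix/Linux side, a minimal sketch of catching termination signals and flushing could look like the following (logf, the file name, and the choice of signals are assumptions for illustration):

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static FILE *logf;                          /* hypothetical global log stream */

static void flush_and_exit(int signo)
{
    (void)signo;
    /* fflush() is not formally async-signal-safe; tolerable here only because
       the process exits immediately afterwards anyway. */
    if (logf)
        fflush(logf);
    _exit(1);
}

int main(void)
{
    struct sigaction sa;

    logf = fopen("app.log", "w");

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = flush_and_exit;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGTERM, &sa, NULL);          /* SIGKILL cannot be caught */
    sigaction(SIGINT,  &sa, NULL);

    /* ... logging loop: fprintf(logf, ...) without an fflush per message ... */
    return 0;
}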
I would suggest an asynchronous write-through. That way you don't need to wait for the write IOP to happen, nor will the OS delay the IOP. See the CreateFile() flags FILE_FLAG_WRITE_THROUGH | FILE_FLAG_OVERLAPPED.
You don't need FILE_FLAG_NO_BUFFERING. That's only to skip the OS cache. You would only need it if you are worried about the entire OS dying violently.
If your program terminates by calling exit() or returning from main(), the C standard guarantees that open streams are flushed and closed, so no special handling is needed. It sounds from your description like this is what is happening: if your program died due to a signal, you wouldn't see the flush.
I'm having trouble understanding what the problem is exactly.
If it's just that you're trying to find a happy medium between flushing often and the default fully buffered output, then maybe line buffering is what you want:
setvbuf(stream, 0, _IOLBF, 0);
