How to use a file as a mutex in Linux and C?

I have different processes concurrently accessing a named pipe in Linux and I want to make this access mutually exclusive.
I know it is possible to achieve that using a mutex placed in a shared memory area, but since this is a sort of homework assignment I have some restrictions.
Thus, what I thought of is using locking primitives on files to achieve mutual exclusion; I made some attempts but I can't make it work.
This is what I tried:
flock(lock_file, LOCK_EX);
// critical section
flock(lock_file, LOCK_UN);
Different processes will use different file descriptors, but all referring to the same file.
Is it possible to achieve something like that? Can you provide an example?

The standard lock-file technique uses options such as O_EXCL on the open() call to try and create the file. You store the PID of the process using the lock, so you can determine whether the process still exists (using kill() to test). You have to worry about concurrency - a lot.
Steps:
Determine name of lock file based on name of FIFO
Open lock file if it exists
Check whether process using it exists
If other process exists, it has control (exit with error, or wait for it to exit)
If other process is absent, remove lock file
At this point, lock file did not exist when last checked.
Try to create it with open() and O_EXCL amongst the other options.
If that works, your process created the file - you have permission to go ahead.
Write your PID to the file; close it.
Open the FIFO - use it.
When done (atexit()?) remove the lock file.
Worry about what happens if you open the lock file and read no PID...is it that another process just created it and hasn't yet written its PID into it, or did it die before doing so? Probably best to back off - close the file and try again (possibly after a randomized nanosleep()). If you get the empty file multiple times (say 3 in a row) assume that the process is dead and remove the lock file.
You could consider having the process that owns the file maintain an advisory lock on the file while it has the FIFO open. If the lock is absent, the process has died. There is still a TOCTOU (time of check, time of use) window of vulnerability between opening the file and applying the lock.
Take a good look at the open() man page on your system to see whether there are any other options to help you. Sometimes, processes use directories (mkdir()) instead of files because even root can't create a second instance of a given directory name, but then you have issues with how to know the PID of the process with the resource open, etc.
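To make the steps concrete, here is a minimal, untested sketch of that protocol. The lock-file name, the retry interval, and the simplification that any kill() failure means the holder is dead (it could also fail with EPERM) are assumptions of mine, not part of the answer:

#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static const char *lockfile = "/tmp/myfifo.lock"; /* derived from the FIFO name */

static void acquire_lock(void)
{
    for (;;) {
        /* try to create the lock file exclusively */
        int fd = open(lockfile, O_WRONLY | O_CREAT | O_EXCL, 0644);
        if (fd >= 0) {
            dprintf(fd, "%ld\n", (long)getpid()); /* record our PID */
            close(fd);
            return; /* we hold the lock - go open the FIFO */
        }
        /* lock file already exists: read the holder's PID and probe it */
        long pid = 0;
        FILE *f = fopen(lockfile, "r");
        if (f) {
            fscanf(f, "%ld", &pid);
            fclose(f);
        }
        if (pid > 0 && kill((pid_t)pid, 0) == -1)
            unlink(lockfile); /* holder is gone - remove the stale lock */
        else
            usleep(100000);   /* holder alive (or file still empty) - back off */
    }
}

static void release_lock(void) /* e.g. registered with atexit() */
{
    unlink(lockfile);
}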

I'd definitely recommend using an actual mutex (as has been suggested in the comments); for example, the pthread library provides an implementation. But if you want to do it yourself using a file for educational purposes, I'd suggest taking a look at this answer I posted a while ago which describes a method for doing so in Python. Translated to C, it should look something like this (Warning: untested code, use at your own risk; also my C is rusty):
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

// each instance of the process should have a different filename here
char *process_lockfile = "/path/to/hostname.pid.lock";
// all processes should have the same filename here
char *global_lockfile = "/path/to/lockfile";

// create the file if necessary (only once, at the beginning of each process)
FILE *f = fopen(process_lockfile, "w");
fprintf(f, "\n"); // or maybe write the hostname and pid
fclose(f);

// now, each time you have to lock the file:
int lock_acquired = 0;
while (!lock_acquired) {
    int r = link(process_lockfile, global_lockfile);
    if (r == 0) {
        lock_acquired = 1;
    } else {
        // link() can report failure even when it succeeded (e.g. on NFS);
        // the link count on our private file tells the truth
        struct stat buf;
        stat(process_lockfile, &buf);
        lock_acquired = (buf.st_nlink == 2);
    }
}
// do your writing
unlink(global_lockfile);
lock_acquired = 0;

Your example is as good as you're going to get using flock(2) (which is, after all, merely an "advisory" lock (which is to say, not a lock at all, really)). The man page for it on my Mac OS X system has a couple of possibly important provisos:
Locks are on files, not file descriptors. That is, file descriptors duplicated through dup(2) or fork(2) do not result in multiple instances of a lock, but rather multiple references to a single lock. If a process holding a lock on a file forks and the child explicitly unlocks the file, the parent will lose its lock.
and
Processes blocked awaiting a lock may be awakened by signals.
both of which suggest ways it could fail.
// would have been a comment, but I wanted to quote the man page at some length
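For completeness, here is what the questioner's snippet looks like as a small, self-contained program (an untested sketch; the lock-file path is a placeholder). Note the EINTR check, since, as quoted above, a blocked flock() can be woken by a signal:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    /* every process opens its own descriptor on the same lock file */
    int lock_fd = open("/tmp/fifo.lock", O_RDONLY | O_CREAT, 0644);
    if (lock_fd < 0) {
        perror("open");
        return 1;
    }
    /* retry if a signal wakes us before the lock is granted */
    while (flock(lock_fd, LOCK_EX) == -1) {
        if (errno != EINTR) {
            perror("flock");
            return 1;
        }
    }
    /* critical section: access the FIFO here */
    flock(lock_fd, LOCK_UN);
    close(lock_fd);
    return 0;
}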

Related

Is it possible to have a shared global variable for inter-process communication?

I need to solve a concurrency assignment for my operating systems class. I don't want the solution here, but I am lacking one part.
We should write a process that writes to a file, reads from it and then deletes it. We should run this process twice, in two different shells. No fork here, for simplicity. Process A should write, Process B should then read, and then one of them should delete the file. Afterwards they switch roles.
I understand that you can achieve atomicity easily by locking. With while loops around the read and write sections etc. you can also get further control. But when I run process A and then process B, process B will spin at the write section until it acquires the lock, and won't get into reading when process A releases the lock. So my best guess is to have a read lock and a write lock. This information must be shared somehow between the processes. The only way I can think of is some global variable, but since both processes hold copies of the variables, I think this is not possible. Another way would be to have a read-lock file and a write-lock file, but that seems overly complicated to me.
Is there a better way?
You can use semaphores to ensure the writer and deleter wait for the previous process to finish its job (see man sem_init for details).
When running multiple processes with semaphores, the semaphore should be created in shared memory (see man shm_open for details).
You will need as many semaphores as there are stages in this pipeline.
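As a sketch of this suggestion, here is a POSIX named semaphore used as a cross-process mutex. It is a close relative of the shm_open route the answer points at, and simpler to set up between two independently started processes; the name "/assignment_lock" is a placeholder:

#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>

int main(void)
{
    /* both processes open the same named semaphore; O_CREAT with an
       initial value of 1 makes it behave like a cross-process mutex */
    sem_t *sem = sem_open("/assignment_lock", O_CREAT, 0644, 1);
    if (sem == SEM_FAILED) {
        perror("sem_open");
        return 1;
    }
    sem_wait(sem);  /* enter the critical section */
    /* ... write, read, or delete the shared file here ... */
    sem_post(sem);  /* leave the critical section */
    sem_close(sem);
    return 0;
}

On Linux, compile with -pthread; for the full assignment you would open one such semaphore per pipeline stage, as the answer says.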
You can use a file as a lock. Two processes try to create a file with a previously agreed-upon name using the O_EXCL flag. Only one will succeed. The one that succeeds gets access to the resource. So in this case Process A should try to create a file with the agreed name, say foo, with the O_EXCL flag and, if successful, it should go ahead and write the information to the file. After its work is complete, Process A should unlink foo. Process B should try to create the file foo with the O_EXCL flag, and if successful, try to read the file created by Process A. After its attempt is over, Process B should unlink the file foo. That way only one process will be accessing the file at any time.
Your problem (with files and alternating roles in the creation/deletion of files) seems to be a candidate for the O_EXCL flag on opening/creating the file. This flag makes the open(2) system call succeed in creating the file only if the file doesn't already exist, so the file itself acts as a semaphore. Either process (A or B) can release the lock; the one that does simply makes the owner role available again.
You will see that both processes try to take one of the roles, but if they both try to take the owner role, one of them will succeed and the other will fail.
Just install a SIGINT signal handler in the owning process to allow it to delete the file in case it gets signalled, or you will leave the file behind and after that no process will be able to assume the owner role (at least until you delete it manually).
This was the first form of locking feature in Unix, long before semaphores, shared memory or other ways to block processes existed. It relies on the atomicity of system calls: two processes cannot both succeed in creating the same file with O_EXCL.
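A minimal sketch of the create/unlink handshake both answers describe (untested; the file name foo and the back-off delay are placeholders of mine):

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* returns 0 once this process has taken its turn, -1 on a real error */
int take_turn(void)
{
    int fd;
    /* only one process can create foo; the loser backs off and retries */
    while ((fd = open("foo", O_CREAT | O_EXCL | O_WRONLY, 0644)) == -1) {
        if (errno != EEXIST)
            return -1;  /* a real error, not contention */
        usleep(10000);
    }
    /* ... this process now owns the resource: write or read the data ... */
    close(fd);
    unlink("foo");      /* release so the other process can take its turn */
    return 0;
}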

Is it safe to use fprintf from multiple processes without using any locking mechanism? [duplicate]

Here are processes a and b, both of which are multithreaded.
a forks b, and b immediately execs one new program;
a dups and freopens stderr to the logfile (a is in fact Apache's httpd 2.22);
b inherits the opened stderr from a (I am adapting Apache httpd; b is my program), and b uses fprintf(stderr, ...) for logging;
so a and b share the same file for logging;
there is no lock mechanism for a and b to write the log.
I found that some log messages are interleaved, and a few log messages got lost.
Can the two writers to the same file implicitly lock each other out?
The more important question is: if we use fprintf only within one single multithreaded process, is fprintf thread-safe, i.e. will one fprintf call not interleave with another fprintf call in another thread? Many articles say this, but it is not easy to verify myself, so I'm asking for help here.
A: The code for duplicating the fd is like this:
......
rv = apr_file_dup2(stderr_log, s_main->error_log, stderr_p); // dup the stderr to the logfile
apr_file_close(s_main->error_log); // here, 2 fds point to the same file description, so close one of them
then
B: Apache itself uses this manner for logging:
......
if (rv != APR_SUCCESS) {
    ap_log_error(APLOG_MARK, APLOG_CRIT, rv, s_main, ".........");
C: For convenience, I log in this way:
fprintf(stderr, ".....\n")
I am quite sure Apache and my program use the same fd for file writing.
If you're using a single FILE object to perform output on an open file, then whole fprintf calls on that FILE will be atomic, i.e. a lock is held on the FILE for the duration of the fprintf call. Since a FILE is local to a single process's address space, this setup is only possible in multi-threaded applications; it does not apply to multi-process setups where several different processes are accessing separate FILE objects referring to the same underlying open file. Even though you're using fprintf here, each process has its own FILE it can lock and unlock without the others seeing the changes, so writes can end up interleaved. There are several ways to prevent this from happening:
Allocate a synchronization object (e.g. a process-shared semaphore or mutex) in shared memory and make each process obtain the lock before writing to the file (so only one process can write at a time); OR
Use filesystem-level advisory locking, e.g. fcntl locks or the (non-POSIX) BSD flock interface (see the sketch after this list); OR
Instead of writing directly to the log file, write to a pipe that another process will feed into the log file. Writes to a pipe are guaranteed (by POSIX) to be atomic as long as they are smaller than PIPE_BUF bytes long. You cannot use fprintf in this case (since it might perform multiple underlying write operations), but you could use snprintf to a PIPE_BUF-sized buffer followed by write.
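As an illustration of the second option, a rough sketch of wrapping each log write in an fcntl() record lock (untested; every process runs this against its own descriptor for the same log file):

#include <fcntl.h>
#include <unistd.h>

/* take an exclusive lock on the whole file, write, then release */
void locked_write(int fd, const char *buf, size_t len)
{
    struct flock fl = { 0 };
    fl.l_type = F_WRLCK;      /* exclusive (write) lock */
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;             /* 0 means "through end of file" */
    fcntl(fd, F_SETLKW, &fl); /* F_SETLKW blocks until the lock is granted */
    write(fd, buf, len);
    fl.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &fl);
}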

Why should I close all file descriptors after calling fork() and prior to calling exec...()? And how would I do it?

I've seen a lot of C code that tries to close all file descriptors between calling fork() and calling exec...(). Why is this commonly done and what is the best way to do it in my own code, as I've seen so many different implementations already?
When calling fork(), your operating system creates a new process by simply cloning your existing process. The new process will be pretty much identical to the process it was cloned from, except for its process ID and any properties that are documented to be replaced or reset by the fork() call.
When calling any form of exec...(), the process image of the calling process is replaced by a new process image, but other than that the process state is preserved. One consequence is that open file descriptors in the process file descriptor table prior to calling exec...() are still present in that table afterwards, so the new process code inherits access to them. I guess this was done so that STDIN, STDOUT, and STDERR are automatically inherited by child processes.
However, keep in mind that in POSIX C file descriptors are not only used to access actual files; they are also used for all kinds of system and network sockets, pipes, shared memory identifiers, and so on. If you don't close these prior to calling exec...(), your new child process will get access to all of them, even to those resources it could not gain access to on its own, as it doesn't even have the required access rights. Think about a root process creating a non-root child process, yet this child would have access to all open file descriptors of the root parent process, including open files that should only be writable by root or protected server sockets below port 1024.
So unless you want a child process to inherit access to currently open file descriptors, as may explicitly be desired e.g. to capture STDOUT of a process or feed data via STDIN to that process, you are required to close them prior to calling exec...(). Not only because of security (which sometimes may play no role at all) but also because otherwise the child process will have fewer free file descriptors available (and think of a long chain of processes, each opening files and then spawning a sub-process... there will be fewer and fewer free file descriptors available).
One way to do that is to always open files using the flag O_CLOEXEC, which ensures that the file descriptor is automatically closed when exec...() is ever called. One problem with that solution is that you cannot control how external libraries may open files, so you cannot rely on all code always setting this flag.
Another problem is that this solution only works for file descriptors created with open(). You cannot pass that flag when creating sockets, pipes, etc. This is a known problem and some systems work around it by offering the non-standard accept4(), pipe2(), and dup3() calls, as well as the SOCK_CLOEXEC flag for sockets; however, these are not yet POSIX standard and it's unknown if they will become standard (this is planned, but until a new standard has been released we cannot know for sure; also, it will take years until all systems have adopted them).
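For illustration, the creation-time close-on-exec variants look like this (a sketch; pipe2() and SOCK_CLOEXEC are the Linux-specific extensions mentioned above):

#define _GNU_SOURCE /* for pipe2() on glibc */
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

int fd = open("data.txt", O_RDONLY | O_CLOEXEC); /* flag applied atomically by open() */
int pfd[2];
int rc = pipe2(pfd, O_CLOEXEC);                  /* Linux-specific */
int s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0); /* Linux-specific */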
What you can do is to later on set the flag FD_CLOEXEC using fcntl() on the file descriptor, however, note that this isn't safe in a multi-thread environment. Just consider the following code:
int so = socket(...);
fcntl(so, F_SETFD, FD_CLOEXEC);
If another thread calls fork() in between the first and the second line, which is of course possible, the flag has not been set yet and thus this file descriptor won't get closed.
So the only way that is really safe is to explicitly close them and this is not as easy as it may seem!
I've seen a lot of code that does stupid things like this:
for (int i = STDERR_FILENO + 1; i < 256; i++) close(i);
But just because some POSIX systems have a default limit of 256 doesn't mean that this limit cannot be raised. Also, on some systems the default limit is higher to begin with.
Using FD_SETSIZE instead of 256 is equally wrong: just because the select() API has a hard limit by default on most systems doesn't mean that a process cannot have more open file descriptors than this limit (after all, you don't have to use select() with them; you can use the poll() API as a replacement, and poll() has no upper limit on file descriptor numbers).
What is always correct is to use OPEN_MAX instead of 256, as that is really the absolute maximum of file descriptors a process can have. The downside is that OPEN_MAX can theoretically be huge and doesn't reflect the real current runtime limit of a process.
To avoid having to close too many non-existing file descriptors, you can use this code instead:
int fdlimit = (int)sysconf(_SC_OPEN_MAX);
for (int i = STDERR_FILENO + 1; i < fdlimit; i++) close(i);
sysconf(_SC_OPEN_MAX) is documented to update correctly if the open file limit (RLIMIT_NOFILE) has been raised using setrlimit(). The resource limits (rlimits) are the effective limits for a running process and for files they will always have to be between _POSIX_OPEN_MAX (documented as the minimum number of file descriptors a process is always allowed to open, must be at least 20) and OPEN_MAX (must be at least _POSIX_OPEN_MAX and sets the upper limit).
While closing all possible descriptors in a loop is technically correct and will work as desired, it may try to close several thousand file descriptors, most of which will often not exist. Even if a single close() call for a non-existing file descriptor is fast (which is not guaranteed by any standard), doing this thousands of times may take a while on weaker systems (think of embedded devices, think of small single-board computers), which may be a problem.
So several systems have developed more efficient ways to solve this issue. Famous examples are closefrom() and fdwalk(), which BSD and Solaris systems support. Unfortunately, The Open Group voted against adding closefrom() to the standard (quote): "it is not possible to standardize an interface that closes arbitrary file descriptors above a certain value while still guaranteeing a conforming environment." (Source) This is of course nonsense, as they make the rules themselves: if they defined that certain file descriptors can always be silently omitted from closing when the environment or system requires it, or when the code itself requests that, this would break no existing implementation of that function and would still offer the desired functionality for the rest of us. Without these functions, people will use a loop and do exactly what The Open Group tries to avoid here, so not adding it only makes the situation even worse.
On some platforms you are basically out of luck, e.g. macOS, which is fully POSIX conformant. If you don't want to close all file descriptors in a loop on macOS, your only option is to not use fork()/exec...() but instead posix_spawn(). posix_spawn() is a newer API for platforms that don't support process forking; it can be implemented purely in user space on top of fork()/exec...() for those platforms that do support forking, and can otherwise use some other API a platform offers for starting child processes. On macOS there exists a non-standard flag POSIX_SPAWN_CLOEXEC_DEFAULT, which will treat all file descriptors as if the CLOEXEC flag had been set on them, except for those for which you explicitly specified file actions.
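A hedged sketch of that macOS-specific approach (untested; it assumes Apple's <spawn.h> and the non-standard POSIX_SPAWN_CLOEXEC_DEFAULT flag named above):

#include <spawn.h>
#include <sys/types.h>

extern char **environ;

/* spawn `path` with every inherited fd treated as close-on-exec */
pid_t spawn_without_fds(const char *path, char *const argv[])
{
    pid_t pid;
    posix_spawnattr_t attr;
    posix_spawnattr_init(&attr);
    posix_spawnattr_setflags(&attr, POSIX_SPAWN_CLOEXEC_DEFAULT);
    int rc = posix_spawn(&pid, path, NULL, &attr, argv, environ);
    posix_spawnattr_destroy(&attr);
    return rc == 0 ? pid : (pid_t)-1;
}

If the child should keep stdin/stdout/stderr, add posix_spawn_file_actions_adddup2() entries for those descriptors, since file actions are exempt from the flag.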
On Linux you can get a list of file descriptors by looking at the path /proc/{PID}/fd/ with {PID} being the process ID of your process (getpid()), that is, if the proc file system has been mounted at all and it has been mounted to /proc (but a lot of Linux tools rely on that; not doing so would break many other things as well). Basically you can limit yourself to closing all descriptors listed under this path.
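A sketch of that approach (Linux-specific, untested; it skips the standard descriptors and the directory stream's own descriptor):

#include <dirent.h>
#include <stdlib.h>
#include <unistd.h>

static void close_inherited_fds(void)
{
    DIR *d = opendir("/proc/self/fd");
    if (d == NULL)
        return;                   /* no /proc: fall back to the loop above */
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        int fd = atoi(e->d_name); /* "." and ".." parse as 0 and are skipped */
        if (fd > STDERR_FILENO && fd != dirfd(d))
            close(fd);
    }
    closedir(d);
}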
True story: Once upon a time I wrote a simple little C program that opened a file, and I noticed that the file descriptor returned by open was 4. "That's funny," I thought. "Standard input, output, and error are always file descriptors 0, 1, and 2, so the first file descriptor you open is usually 3."
So I wrote another little C program that started reading from file descriptor 3 (without opening it, that is, but rather, assuming that 3 was a pre-opened fd, just like 0, 1, and 2). It quickly became apparent that, on the Unix system I was using, file descriptor 3 was pre-opened on the system password file. This was evidently a bug in the login program, which was exec'ing my login shell with fd 3 still open on the password file, and the stray fd was in turn being inherited by programs I ran from my shell.
Naturally the next thing I tried was a simple little C program to write to the pre-opened file descriptor 3, to see if I could modify the password file and give myself root access. This, however, didn't work; the stray fd 3 was opened on the password file in read-only mode.
But at any rate, this helps to explain why you shouldn't leave file descriptors open when you exec a child process.
[Footnote: I said "true story", and it mostly is, but for the sake of the narrative I did change one detail. In fact, the buggy version of /bin/login was leaving fd 3 opened on the groups file, /etc/group, not the password file.]

File permissions for a process in C

If a process were to create a file and then close it:
void function_procA (void) {
    FILE *G_fp = NULL;
    G_fp = fopen("/var/log/file.log", "w");
    fclose(G_fp);
}
could another process open a pointer to that file and start writing to it?
void function_procB (void) {
    FILE *G_fp = NULL;
    G_fp = fopen("/var/log/file.log", "w");
    fprintf(G_fp, "Hello, World!\n");
    fclose(G_fp);
}
In short: what are file permissions between different processes? And if only one process gets exclusive rights to write to the file by default, how do I change the permissions so that the other process also has the right to write to it?
Thanks.
That would be a data race.
It can be avoided very easily with file locking:
#include <sys/file.h>
flock(fileno(fp), LOCK_SH); // shared lock for reading
flock(fileno(fp), LOCK_EX); // exclusive lock for writing
flock(fileno(fp), LOCK_UN); // release the lock
The above example works on Linux; no idea about Windows, though. On some systems flock() is implemented as a wrapper around the fcntl() system call (on Linux it is a system call in its own right).
If both processes are run by the same user, it should work.
I think this Wikipedia article has a good description.
Quoting:
Unix-like operating systems (including Linux and Apple's OS X) do not normally automatically lock open files or running programs. Several kinds of file-locking mechanisms are available in different flavors of Unix, and many operating systems support more than one kind for compatibility. The two most common mechanisms are fcntl(2) and flock(2). A third such mechanism is lockf(3), which may be separate or may be implemented using either of the first two primitives. Although some types of locks can be configured to be mandatory, file locks under Unix are by default advisory. This means that cooperating processes may use locks to coordinate access to a file among themselves, but uncooperative processes are also free to ignore locks and access the file in any way they choose. In other words, file locks lock out other file lockers only, not I/O.

dup() followed by close() from multiple threads or processes

My program does the following, in chronological order:
The program is started with root permissions.
Among other tasks, a file only readable with root permissions is open()ed.
Root privileges are dropped.
Child processes are spawned with clone() and the CLONE_FILES | CLONE_FS | CLONE_IO flags set, which means that while they use separate regions of virtual memory, they share the same file descriptor table (and other IO stuff).
All child processes execve() their own programs (the FD_CLOEXEC flag is not used).
The original program terminates.
Now I want every spawned program to read the contents of the aforementioned file, but after they all have read the file, I want it to be closed (for security reasons).
One possible solution I'm considering now is having a step 3a where the fd of the file is dup()licated once for every child process, so each child gets its own fd (passed via argv). Then every child program would simply close() its fd, so that after all fds pointing to the file are close()d the "actual file" is closed.
But does it work that way? And is it safe to do this (i.e. is the file really closed)? If not, is there another/better method?
While using dup() as I suggested above is probably just fine, I've now (a day after asking this SO question) realized that there is a nicer way to do this, at least from the point of view of thread safety.
All dup()licated file descriptors point to the same file position indicator, which of course means you run into trouble when multiple threads/processes might simultaneously try to change the file position during read operations (even if your own code does so in a thread-safe way, the same doesn't necessarily go for libraries you depend on).
So wait, why not just call open() multiple times (once for every child) on the needed file before dropping root? From the manual of open():
A call to open() creates a new open file description, an entry in the system-wide table of open files. This entry records the file offset and the file status flags (modifiable via the fcntl(2) F_SETFL operation). A file descriptor is a reference to one of these entries; this reference is unaffected if pathname is subsequently removed or modified to refer to a different file. The new open file description is initially not shared with any other process, but sharing may arise via fork(2).
Could be used like this:
int fds[CHILD_C];
for (int i = 0; i < CHILD_C; i++) {
    fds[i] = open("/foo/bar", O_RDONLY);
    // check for errors here
}
drop_privileges();
// etc
Then every child gets a reference to one of those fds through argv and does something like:
FILE *stream = fdopen(atoi(argv[FD_STRING_I]), "r");
/* ... read whatever is needed from the stream ... */
fclose(stream); // this also closes the underlying file descriptor
Disclaimer: According to a bunch of tests I've run this is indeed safe and sound. I have however only tested open()ing with O_RDONLY. Using O_RDWR or O_WRONLY may or may not be safe.
