What is the difference between locking with `fcntl` and `flock`? - c

I've been reading for hours but still can't understand the difference between the two kinds of locks. The only things I understand are that fcntl() offers granular locking that can lock specific byte ranges, and that only fcntl() supports NFS locking.
It's said that the difference lies in their semantics (how they behave when a descriptor is duplicated via dup() or inherited across fork()), but I can't tell what the difference is in practice.
My scenario is that I'm writing to a log file in a fork()-based server, where every forked process writes to the same file when something happens. Why would I want to use flock(), and why would I want to use fcntl() locks?

I have tried to figure out the differences based on the available documentation and drew the following conclusions (please correct me if I am wrong):
With fcntl() (POSIX):
you create a lock record on the file at the filesystem level, including the process id.
If the process dies or closes any file descriptor referring to this file, the lock record is removed by the system.
A request for an exclusive lock shall fail if the file descriptor was not opened with write access.
simply: fcntl() locks work as a Process <--> File relationship, ignoring file descriptors
flock() (BSD) is different (Linux: since kernel 2.0, flock() is implemented as a system call in its own right rather than being emulated in the GNU C library as a call to fcntl()):
flock() creates locks on the system's "open file descriptions". "Open file descriptions" are generated by open() calls.
a file descriptor (FD) is a reference to an "open file description". FDs generated by dup() or fork() refer to the same "open file description".
a process may generate multiple "open file descriptions" for one file by open()ing the file multiple times
flock() places its locks via an FD on an "open file description"
therefore flock() may be used to synchronize file access among processes as well as threads (in one or more processes).
see the flock(2) and especially open(2) man pages for details on "open file descriptions".
In your scenario you probably want to use fcntl()-based locks, because your forked processes will open() the logfile on their own and do not expect to inherit a file descriptor with a lock already placed on it.
If you need synchronisation among multiple threads, possibly in more than one process, you should use flock()-based locks if your system supports them without emulation via fcntl(). Then every thread needs to open() the file itself rather than using dup()ed or fork()ed handles.
Edit 2022: An excellent write-up and additional thoughts here: https://lwn.net/Articles/586904/

Related

Race condition during file write

Suppose two different processes open the same file independently, and so have different entries in the system-wide open file table, but those entries refer to the same i-node.
Since the file descriptors refer to different entries in the system-wide open file table, they may have different file offsets. Is there any chance of a race condition during write() given that the file offsets differ? And how does the kernel avoid it?
Book: The Linux Programming Interface; Page no. 95; Chapter-5 (File I/O: Further details); Section 5.4
(I'm assuming, since you used write(), that the question refers to POSIX systems.)
Each write() operation is supposed to be fully atomic.
Per POSIX 7's 2.9.7 Thread Interactions with Regular File Operations:
All of the following functions shall be atomic with respect to each
other in the effects specified in POSIX.1-2017 when they operate on
regular files or symbolic links:
chmod()
chown()
close()
creat()
dup2()
fchmod()
fchmodat()
fchown()
fchownat()
fcntl()
fstat()
fstatat()
ftruncate()
lchown()
link()
linkat()
lseek()
lstat()
open()
openat()
pread()
read()
readlink()
readlinkat()
readv()
pwrite()
rename()
renameat()
stat()
symlink()
symlinkat()
truncate()
unlink()
unlinkat()
utime()
utimensat()
utimes()
write()
writev()
If two threads each call one of these functions, each call shall
either see all of the specified effects of the other call, or none of
them. The requirement on the close() function shall also apply
whenever a file descriptor is successfully closed, however caused (for
example, as a consequence of calling close(), calling dup2(), or of
process termination).
But pay particular attention to the specification for write() (emphasis mine):
The write() function shall *attempt* to write nbyte bytes ...
POSIX says that write() calls to a file shall be atomic. POSIX does not say that the write() calls will be complete. Here's a Linux bug report where a signal was interrupting a write() that was partially complete. Note the explanation:
Now this is perfectly valid behavior as far as spec (POSIX, SUS,...) is concerned (please correct me if I'm missing something). So I'd say the program is incorrect. But OTOH I agree that this was not possible before a50527b1 and we don't want to break userspace. I'd hate to revert that commit since it allows us to interrupt processes doing large writes (especially when something goes wrong) but if you explain to us why this behavior is a problem for you then I guess I'll have to revert it.
That's all but an admission that there is a POSIX requirement for write() calls to be atomic, if not complete, along with an offer to revert to the earlier behavior in which write() calls in this circumstance apparently were also complete.
Note, though, there are lots of file systems out there that don't conform to POSIX standards.
Since the file descriptors refer to different entries in the system-wide open file table, they may have different file offsets. Is there any chance of a race condition during write given the different offsets?
Any write() in Linux can return a short count, for example due to a signal being delivered to a userspace handler. For simplicity, let's ignore that, and only consider what happens to the successfully written data.
There are two scenarios:
The regions written to do not overlap.
(For example, one process writes 100 bytes starting at offset 23, and another writes 50 bytes starting at offset 200.)
There is no race condition in this case.
The regions written to do overlap.
(For example, one process writes 100 bytes starting at offset 50, and another writes 10 bytes starting at offset 70.)
There is a race condition. It is impossible to predict (without advisory locks etc.) the order in which the data gets updated.
Depending on the target filesystem, and if the writes are large enough for paging effects to be observed, the two writes may even be "mixed" (in page-sized chunks) in Linux on some filesystems on machines with more than one hardware thread, even though POSIX says this shouldn't happen.
Normally, writes go through the Linux page cache. It is possible for one of the processes to have opened the file with O_DIRECT | O_SYNC, bypassing the page cache. In that case, there are many additional corner cases that can occur. Specifically, even if you use a shared clock source, and can show that the normal/page-cached write completed before the direct write call was made, it may still be possible for the page-cached write to overwrite the direct write contents.
And how does the kernel avoid it?
It doesn't. Why should it? POSIX says each write is atomic, but relying on that alone there is no practical way to avoid a race condition (and get consistent, expected results).
Userspace programs have at least four different methods to avoid such races:
Advisory file locks on the entire open file using the flock() interface.
Advisory file locks on the entire open file using the lockf() interface. In Linux, these are just shorthand for placing/removing fcntl() advisory locks on the entire file.
Advisory record locks on the file using the fcntl() interface. This works even across shared volumes, as long as the file server is configured to support file locking.
Obtaining an exclusive lease on the open file using the fcntl() interface.
Advisory file locks are like street lights: they are intended for co-operating processes to easily determine who gets to go when. However, they do not stop any other process from actually ignoring the "lock" and accessing the file.
File leases are a mechanism whereby one or more processes can hold a read lease on the same file at the same time, but only one process can hold a write lease, and only when that process is the only one with the file open. When granted, the write lease (or exclusive lease) means that if any other process tries to open the same file, the lease owner is notified by a signal (which you can control using the fcntl() interface) and has a configured time (typically 45 seconds; see man 5 proc and /proc/sys/fs/lease-break-time, in seconds) to relinquish the lease. The opener is blocked in the kernel until the lease is downgraded or the lease-break time passes, in which case the kernel breaks the lease.
This allows the lease holder to postpone the opening for a short while.
However, the lease holder cannot block the opening, and cannot e.g. replace the file with a decoy one; the opener already has a hold on the inode, and the lease break time is just a grace period for cleanup work.
Technically, a fifth method would be mandatory file locking, but aside from the kernel's own use with respect to executed binaries, it is not used, and is actually buggy in Linux anyway. In Linux, an inode is only locked against modification while it is being executed as a binary by the kernel. (You can still rename or delete the original file and create a new one, so that any subsequent execs will execute the modified/new data. Attempts to modify a file that is being executed as a binary will fail with the error ETXTBSY.)

fcntl not working (doesn't lock the file) in multi-threaded programme

I'm trying to access a file from multiple threads and to synchronize the accesses with record locks (fcntl). The problem is that fcntl doesn't lock the file.
I've tried giving each thread its own file descriptor and sharing one global file descriptor, and I've checked the parameters to fcntl, but found no cause or solution.
Is there anything wrong with the function I've written? Or is there something to know when using fcntl with multiple threads?
fcntl() implements process-level locking. All your threads live in the same process, so there are no locks between them (or, put another way: all threads within a process share the same locks).
The Linux man page says:
The threads in a process share locks. In other words, a
multithreaded program can't use record locking to ensure that
threads don't simultaneously access the same region of a file.

Is it possible to have a shared global variable for inter-process communication?

I need to solve a concurrency assignment for my operating systems class. I don't want the solution here, but I am missing one piece.
We should write a program that writes to a file, reads from it, and then deletes it. This program is run twice in two different shells. No fork() here, for simplicity. Process A should write, process B should then read, and then the file should be deleted. Afterwards they switch roles.
I understand that you can achieve atomicity easily by locking. With while loops around the read and write sections etc. you can get further control. But when I run process A and then process B, process B spins before its write section until it acquires the lock, instead of going on to read when process A releases the lock. So my best guess is to have a separate read lock and write lock. This information must somehow be shared between the processes. The only way I can think of is some global variable, but since both processes hold their own copies of variables, I think this is not possible. Another way would be to have a read-lock file and a write-lock file, but that seems overly complicated to me.
Is there a better way?
You can use semaphores to ensure the writer and the deleter wait for the previous process to finish its job (see man sem_init for details).
When running multiple processes, the semaphore should be created in shared memory (see man shm_open for more details).
You will need as many semaphores as there are hand-offs in this pipeline.
You can use file as a lock. Two processes try to create a file with a previously agreed upon name using the O_EXCL flag. Only one will succeed. The one that succeeds gets the access to the resource. So in this case process A should try to create a file with name say, foo, with O_EXCL flag and, if successful, it should go ahead and write to file the information. After its work is complete, Process A should unlink foo. Process B should try to create file foo with O_EXCL flag, and if successful, try to read the file created by Process A. After its attempt is over, Process B should unlink the file foo. That way only one process will be accessing the file at any time.
Your problem (with files and alternating roles in the creation/deletion of files) seems to be a candidate for the O_EXCL flag on opening/creating the file. This flag makes the open(2) system call succeed in creating the file only if the file doesn't exist yet, so the file itself acts as a semaphore. Whichever process holds the lock (A or B) can release it, and releasing it makes the owner role available again.
You will see that both processes try to take one of the roles, but if both try to take the owner role, one of them will succeed and the other will fail.
Just install a SIGINT signal handler in the owning process so it can delete the file if it gets signalled; otherwise you will leave the file behind and no process will be able to assume the owner role (at least until you delete the file manually).
This was the first form of locking in Unix, long before semaphores, shared memory, or other ways to block processes existed. It relies on the atomicity of system calls (two system calls cannot operate on the same file simultaneously).

Confusing documentation of flock(2)

If a process uses open(2) (or similar) to obtain more than one
descriptor for the same file, these descriptors are treated
independently by flock(). An attempt to lock the file using one of
these file descriptors may be denied by a lock that the calling
process has already placed via another descriptor.
If flock() treats the descriptors independently, why would locking the file via one of the file descriptors be denied by a lock placed via another descriptor? What does "independent" mean here?
Also, if I unlock one of the descriptors, will the other descriptors be unlocked as well?
"Treated independently by flock()" means that flock() will not consult one descriptor when attempting to modify the other. However, it doesn't mean they are truly independent: if flock() tries to lock through one descriptor while the file is already locked through the other, the attempt may block.
Think of it as a two-level mechanism. flock() looks at only one descriptor at a time, but upon the lock attempt the system moves to the deeper level and tries to actually lock the file, and that is where the conflict occurs.
Also if I unlock one of the descriptor, would other descriptors unlock as well?
I'm not sure. The quote below states that this is indeed the case when a file has multiple descriptors from fork(2) or dup(2). However, nothing in the second paragraph, which covers multiple open(2) calls, says so, which leads me to believe it is just not a good thing to do :)
From here:
Locks created by flock() are associated with an open file description
(see open(2)). This means that duplicate file descriptors (created
by, for example, fork(2) or dup(2)) refer to the same lock, and this
lock may be modified or released using any of these file descriptors.
Furthermore, the lock is released either by an explicit LOCK_UN
operation on any of these duplicate file descriptors, or when all
such file descriptors have been closed.
If a process uses open(2) (or similar) to obtain more than one file
descriptor for the same file, these file descriptors are treated
independently by flock(). An attempt to lock the file using one of
these file descriptors may be denied by a lock that the calling
process has already placed via another file descriptor.
Suppose your process has two file descriptors, fd1 and fd2, obtained from separate open() calls on the same file. If you take an exclusive flock() lock on fd1 and then call flock() on fd1 again, the second call merely converts (or reconfirms) the existing lock, because both calls go through the same open file description.
However, if the second exclusive lock attempt is made on fd2 instead of fd1, it is deemed to conflict with the lock held through fd1 and will block (or fail with LOCK_NB), despite the fact that it is the same process doing the locking.
This is the sense in which the locks on the file descriptors are independent of each other: the locking system doesn't check which process owns the conflicting lock on a different open file description; it is enough that it is not the current one.
When you unlock one descriptor, you don't change the locks held through any other open file description.

What is the order in which a POSIX system clears the file locks that were not unlocked cleanly?

The POSIX specification for fcntl() states:
All locks associated with a file for a given process shall be removed when a file descriptor for that file is closed by that process or the process holding that file descriptor terminates.
Is this operation of unlocking the file segment locks that were held by a terminated process atomic per-file? In other words, if a process had locked byte segments B1..B2 and B3..B4 of a file but did not unlock the segments before terminating, when the system gets around to unlocking them, are segments B1..B2 and B3..B4 both unlocked before another fcntl() operation to lock a segment of the file can succeed? If not atomic per-file, does the order in which these file segments are unlocked by the system depend on the order in which the file segments were originally acquired?
The specification for fcntl() does not say, but perhaps there is a general provision in the POSIX specification that mandates a deterministic order on operations to clean up after a process that exits uncleanly or crashes.
There's a partial answer in section 2.9.7, Thread Interactions with Regular File Operations, of the POSIX specification:
All of the functions chmod(), close(), fchmod(), fcntl(), fstat(), ftruncate(), lseek(), open(), read(), readlink(), stat(), symlink(), and write() shall be atomic with respect to each other in the effects specified in IEEE Std 1003.1-2001 when they operate on regular files. If two threads each call one of these functions, each call shall either see all of the specified effects of the other call, or none of them.
So, for a regular file, if a thread of a process holds locks on segments of a file and calls close() on the last file descriptor associated with the file, then the effects of close() (including removing all outstanding locks on the file that are held by the process) are atomic with respect to the effects of a call to fcntl() by a thread of another process to lock a segment of the file.
The specification for exit() states:
These functions shall terminate the calling process with the following consequences:
All of the file descriptors, directory streams[, conversion descriptors, and message catalog descriptors] open in the calling process shall be closed.
...
Presumably, open file descriptors are closed as if by appropriate calls to close(), but unfortunately the specification does not say how open file descriptors are "closed".
The 2004 specification seems even more vague when it comes to the steps of abnormal process termination. The only thing I could find is the documentation for abort(). At least with the 2008 specification, there is a section titled Consequences of Process Termination on the page for _Exit(). The wording, though, is still:
All of the file descriptors, directory streams, conversion descriptors, and message catalog descriptors open in the calling process shall be closed.
UPDATE: I just opened issue 0000498 in the Austin Group Defect Tracker.
I don't think the POSIX specification stipulates whether the releasing of locks is atomic or not, so you should assume that it behaves as inconveniently as possible for you. If you need them to be atomic, they aren't; if you need them to be handled separately, they're atomic; if you don't care, some machines will do it one way and other machines the other way. So, write your code so that it doesn't matter.
I'm not sure how you'd write code to detect the problem.
In practice, I expect that the locks would be released atomically, but the standard doesn't say, so you should not assume.
