My understanding of FUSE's multithreaded read cycle is something like this:
....
      .-> read --.
     /            \
open ---> read ----+-> release
     \            /
      `-> read --'
....
That is, once a file has been open'd, multiple read threads are spawned to read different chunks of the file. Then, when everything that was wanted has been read, there is a single, final release. All of these are open, read and release in the sense of the FUSE operations.
I'm creating an overlay filesystem which converts one file type to another. Clearly, random access without any kind of indexing is a problem; so for the time being, I'm resorting to streaming. That is, in the above model, each read thread would begin the conversion process from the start, until it arrives at the correct bit of converted data to push out into the read buffer.
This is obviously very inefficient! To resolve this, a single file conversion process can start at the open stage and use a mutex and read cursor (i.e., "I've consumed this much, so far") that the reader threads can use to force sequential access. That is, the mutex gets locked by the thread that requests the data from the current cursor position and all other reader threads have to wait until it's their turn.
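A minimal sketch of that scheme under pthreads (convert_next_chunk() is a hypothetical stand-in for the conversion process; all FUSE plumbing is omitted):

#include <errno.h>
#include <pthread.h>
#include <stddef.h>
#include <sys/types.h>

/* Hypothetical converter: fills buf with the next chunk of converted
 * output and returns the number of bytes produced. */
int convert_next_chunk(char *buf, size_t size);

struct stream_state {
    pthread_mutex_t lock;
    pthread_cond_t  turn;    /* signalled each time the cursor advances */
    off_t           cursor;  /* converted bytes handed out so far */
};

/* Called from each FUSE read callback. */
static int stream_read(struct stream_state *s, char *buf,
                       size_t size, off_t offset)
{
    pthread_mutex_lock(&s->lock);
    while (offset > s->cursor)   /* "fast-forward": wait for our turn --
                                    this wait is where a deadlock can hide */
        pthread_cond_wait(&s->turn, &s->lock);
    if (offset < s->cursor) {    /* "rewind": can never be satisfied */
        pthread_mutex_unlock(&s->lock);
        return -EDEADLK;
    }
    int n = convert_next_chunk(buf, size);
    if (n > 0)
        s->cursor += n;          /* advance and wake the waiting readers */
    pthread_cond_broadcast(&s->turn);
    pthread_mutex_unlock(&s->lock);
    return n;
}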
I don't see why this wouldn't work for streaming a file out. However, if any random/non-sequential access occurs, we'll have a deadlock: if the requested offset is beyond or before the current cursor position, the mutex will never unlock for the appropriate reader to reach that point. (Presumably we have no access to FUSE's threads that could act as a supervisor. Also, I can't get around the problem by forcing the file to be a FIFO, as FUSE doesn't support writing to them.)
Indeed, I would only be able to detect when this happens if the mutex is locked and the cursor is beyond the requested offset (i.e., the "can't rewind" situation). In that case, I can return EDEADLK, but there's no way to detect "fast-forward" requests that can't be satisfied.
At the risk of the slightly philosophical question... What do I do?
Related
We need to address some performance issues in our application. Using Visual Studio's diagnostic hub, we discovered that file operations were responsible for the problem.
We are using a file to temporarily save critical data that has to be sent to a server via TCP/IP.
The file consists of two parts: a header that stores where to find each specific data set (the offset and length of up to 2500 data sets in the file), and the data itself, addressed by the information in the header.
When data is written to the file, the header is read first, then the data is written to the next empty location, and finally the header information is updated and written back.
When data needs to be sent (in the second thread), the header is read to obtain the next offset and length of data to be read. Then the data is read and sent via TCP/IP. Until we receive the TCP/IP ACK of the server, the header is not updated again. When the ACK arrives, the header information is updated to reflect that the used dataset is now empty.
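For illustration, a header layout matching this description might look as follows; the names and types are my assumptions, not the original code:

#include <stddef.h>
#include <sys/types.h>

#define MAX_DATASETS 2500

struct dataset_entry {
    off_t  offset;           /* where the data set starts in the file */
    size_t length;           /* 0 marks the slot as empty */
};

struct file_header {
    struct dataset_entry entries[MAX_DATASETS];
};
/* The data sets themselves follow the header, at the recorded offsets. */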
All functions involved in this process performed an fopen() and fclose() on the file, and this is what caused our performance issues.
I know that there must be some synchronization when writing the header, because when the first thread puts in new data, the second thread must obey the new write pointer in the file (if not, the newly written data will be lost).
If I can synchronize access to the file pointer, so that whenever one thread is writing to a specific location the other thread does not write there too, is it safe to use the same FILE* variable in both threads?
Or, would it be better to have two FILE* variables that read and write on their own portions of the file?
I know that I could combine both operations into one thread, but then I would have to deal with the delays from synchronous socket operations.
To address the performance issue, I rewrote my code so that the file is now opened only once per thread, with all seek/read/write operations done on that same file pointer. There is no longer any overhead from repeatedly opening and closing the file.
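A minimal sketch of that arrangement, assuming a pthreads setup and a hypothetical file name; each thread owns its FILE*, so the stream positions cannot interfere, while header updates are still serialized with a mutex:

#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t header_lock = PTHREAD_MUTEX_INITIALIZER;

void *writer_thread(void *arg)
{
    (void)arg;
    FILE *f = fopen("queue.dat", "r+b");   /* hypothetical file name */
    if (!f)
        return NULL;
    pthread_mutex_lock(&header_lock);
    fseek(f, 0, SEEK_SET);                 /* the header lives at the start */
    /* fread() the header, fseek() to the next empty slot, fwrite() the
       data, then rewrite the updated header entry ... */
    fflush(f);                             /* push buffered writes out */
    pthread_mutex_unlock(&header_lock);
    fclose(f);
    return NULL;
}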
Problem solved - at least for now.
I'm trying to implement an atomic version of copy-on-write. I have certain conditions that, if met, will trigger a copy of the original file.
I implemented something like this pseudocode:
// write operations
if (some condition) {
    // create a temp file
    rename(srcfile, copied-version);
    rename(tmpfile, srcfile);
}
The problem with this logic: hard links.
I want to transfer the hard links from the copied version to the new srcfile.
You can't.
Hard links are one-directional pointers, so you can't modify or remove other hard links that you don't explicitly know about. All you can do is write to the same file data, and that's not atomic.
This rule applies uniformly to both hard links and file descriptors: you can't modify the content seen through an unknown hard link without also modifying the content seen by another process that has the same file open.
That effectively prevents you from atomically modifying the file that an unknown hard link points to.
If you have control over every process which might modify or access these files (if they are only modified by programs you've written), then you might be able to use flock() to signal to other processes that the file is in use. This won't work if the file is stored on an NFS remote file system, but should generally work otherwise.
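A minimal sketch of such cooperative locking with flock(); the file name and the modification itself are placeholders:

#include <sys/file.h>
#include <fcntl.h>
#include <unistd.h>

/* Every program touching the file must follow the same protocol
 * for this to mean anything -- flock() locks are advisory. */
int modify_locked(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd == -1)
        return -1;
    flock(fd, LOCK_EX);        /* blocks until no one else holds the lock */
    /* ... modify the file here ... */
    flock(fd, LOCK_UN);        /* release the lock */
    close(fd);
    return 0;
}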
In some cases, file leases can be a solution to the underlying issue – ensuring atomic content updates – but only if each reader and writer opens and closes the file for each snapshot.
Because a similar limitation applies to the traditional copy–update–rename-over sequence, the file lease solution might work for the OP as well.
For details, see the Leases and Managing signals sections of man 2 fcntl. The process must either have the same owner as the file, or have the CAP_LEASE capability (usually granted to the process via filesystem capabilities). Superuser processes (running as root) have the capability by default.
The idea is that when the process wishes to make "atomic" changes to the file, it acquires a write lease on the file. This only succeeds if no other process has the file open. If another process tries to open the file, the lease holder receives a signal, and has up to lease-break-time (about a minute, typically) to downgrade the lease (or simply close the file); during that time, the opener will block.
Note that there is no way to divert the opener. The situation is that the opener already has a handle to the underlying inode (so access checks and filename resolution have already occurred); it is just that the kernel won't return it to the userspace process before the lease is released or broken.
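A minimal sketch of acquiring and releasing such a write lease (Linux-specific; the file name is a placeholder, and the signal handling described above is omitted):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDWR);         /* hypothetical file name */
    if (fd != -1 && fcntl(fd, F_SETLEASE, F_WRLCK) == 0) {
        /* only this process has the file open: modify it "atomically" */
        fcntl(fd, F_SETLEASE, F_UNLCK);        /* blocked openers resume */
    }
    if (fd != -1)
        close(fd);
    return 0;
}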
Your lease owner can, however, create a copy of the current contents to a temporary file, acquiring a write lease on that as well, and then rename it over the target file name. This way, each (set of) opener(s) obtain a handle to the file contents as they were at the time of the opening; if they do any modifications, they will be "private", and not reflected on the original file. Since the underlying inode is no longer referred to by any filename, when they (the last process having it open) close it, the inode is deleted and the storage released back to the file system. The Linux page cache also caches such accesses very well, so in many cases the "temporary copy file" never even hits actual storage media (unless there is memory pressure, i.e. memory needed for non-pagecache purposes).
A pure "atomic modification" does not require any kind of copies or renames, only holding the lease for the duration of the set of writes that must appear atomic for the readers.
Note that taking a write lease will normally block until no other process has the file open any longer, so the time at which such a lease-based atomic update can occur is restricted, and not guaranteed to be always available. (For example, you may have a lazy process that just keeps the file open and occasionally polls it. If you have such processes, this lease-based approach won't work – but neither would the copy–rename-over approach.)
Also, leases work only on local files.
If you need record-based atomicity, just use fcntl-based record locks, and have all readers take a read-lock for the region they want to access atomically, and all writers take a write-lock for the region to be updated, as record-locks are advisory (i.e., do not block reads or writes, only other record locks).
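For illustration, a small helper around fcntl() record locks might look like this (the helper name is mine):

#include <fcntl.h>
#include <unistd.h>

/* Advisory lock over one region of the file: readers pass F_RDLCK,
 * writers F_WRLCK, over the byte range they want to access atomically. */
int lock_region(int fd, short type, off_t start, off_t len)
{
    struct flock fl = {
        .l_type   = type,       /* F_RDLCK, F_WRLCK, or F_UNLCK */
        .l_whence = SEEK_SET,
        .l_start  = start,
        .l_len    = len,
    };
    return fcntl(fd, F_SETLKW, &fl);   /* wait until the lock is granted */
}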
So let's say I have the following code, where I open a file, read the contents line by line, pass each line to a function somewhere else, and then rewind the file when I'm done.
FILE *file = Open_File();
char line[max];
while (!EndofFile())
{
    fgets(line, max, file);    /* read the next line into the buffer */
    int length = GetLength(line);
    if (length > 0)
    {
        DoStuffToLine(line);
    }
}
rewind(file);
I'm wondering if there is a way to use threads here to add concurrency. Since I'm just reading the file and not writing to it, I feel like I don't have to worry about race conditions. However, I'm not sure how to handle the code in the while loop: if one thread is looping over the file and another thread is looping over it at the same time, would they cause each other to skip lines, or make other errors? What's a good way to approach this?
If you're trying to do this to improve read performance, you're likely going to be disappointed, since this will almost surely be disk I/O bound. Adding more threads won't help the OS and disk controller fetch data any faster.
However, if you're trying to just process the data in parallel, that's another matter. In that case, I would read the entire file into a memory buffer somewhere, then have your threads process it in parallel. That way you don't have to worry about thread safety with rewinding the file pointer or any other annoying issues like it.
You'll likely still need to use other locking mechanisms for the multithreaded parts of course, depending on exactly what you're doing, but you shouldn't have to worry about what the standard library is going to do when you start accessing a file with multiple threads.
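A sketch of that approach: slurp the file into one buffer up front, then hand each thread a disjoint slice of it (the function name is mine):

#include <stdio.h>
#include <stdlib.h>

/* Reads the whole file into a NUL-terminated heap buffer.
 * Returns NULL on failure; stores the byte count in *out_size. */
char *read_whole_file(const char *path, long *out_size)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return NULL;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);
    char *buf = malloc(size + 1);
    if (buf && fread(buf, 1, size, f) == (size_t)size) {
        buf[size] = '\0';
        *out_size = size;
    } else {
        free(buf);
        buf = NULL;
    }
    fclose(f);
    return buf;
}
/* Each thread can then be handed a disjoint [start, end) slice of buf. */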
The concurrency adds some race condition problems:
1. The EndofFile() function is evaluated at the start of the loop, so the loop condition may hold for two threads at once; one thread then reaches the end of the file, and the other attempts to read past it. You never know when a thread will be scheduled;
2. The same is true for the GetLength() function: by the time a thread has the length information, the length may have changed because another thread has read another line;
3. You are reading the file sequentially; even if you rewind it, the current position of the I/O pointer may be altered by some other thread at any time.
Furthermore, as Telgin pointed out, reading a file is not CPU bound but I/O bound. You can't improve the performance here, because you need locks to guarantee thread safety, and locking just introduces overhead.
I'm not sure this is the best approach, but you could read the file once, store its contents in two separate objects, and read the objects instead of the file. Just make sure to do the cleanup afterwards.
I need to write something like 64 kB of data atomically in the middle of an existing file. That is all, or nothing should be written. How to achieve that in Linux/C?
I don't think it's possible, or at least there's not any interface that guarantees as part of its contract that the write would be atomic. In other words, if there is a way that's atomic right now, that's an implementation detail, and it's not safe to rely on it remaining that way. You probably need to find another solution to your problem.
If however you only have one writing process, and your goal is that other processes either see the full write or no write at all, you can just make the changes in a temporary copy of the file and then use rename to atomically replace it. Any reader that already had a file descriptor open to the old file will see the old contents; any reader opening it newly by name will see the new contents. Partial updates will never be seen by any reader.
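A sketch of that pattern; the temporary name is a placeholder, and it must live on the same filesystem as the target for rename() to be atomic:

#include <stdio.h>
#include <unistd.h>

/* Write the new contents to a temporary file, make them durable,
 * then atomically rename the temporary over the target name. */
int replace_file(const char *path, const void *data, size_t len)
{
    const char *tmp = "newfile.tmp";   /* placeholder temporary name */
    FILE *f = fopen(tmp, "wb");
    if (!f)
        return -1;
    fwrite(data, 1, len, f);
    fflush(f);
    fsync(fileno(f));                  /* contents hit the disk first */
    fclose(f);
    return rename(tmp, path);          /* the atomic replacement */
}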
There are a few approaches to modify file contents "atomically". While technically the modification itself is never truly atomic, there are ways to make it seem atomic to all other processes.
My favourite method in Linux is to take a write lease using fcntl(fd, F_SETLEASE, F_WRLCK). It will only succeed if fd is the only open descriptor to the file; that is, nobody else (not even this process) has the file open. Also, the file must be owned by the user running the process, or the process must run as root, or the process must have the CAP_LEASE capability, for the kernel to grant the lease.
When successful, the lease owner process gets a signal (SIGIO by default) whenever another process is opening or truncating the file. The opener will be blocked by the kernel for up to /proc/sys/fs/lease-break-time seconds (45 by default), or until the lease owner releases or downgrades the lease or closes the file, whichever is shorter. Thus, the lease owner has dozens of seconds to complete the "atomic" operation, without any other process being able to see the file contents.
There are a couple of wrinkles one needs to be aware of. One is the privileges or ownership required for the kernel to allow the lease. Another is the fact that the other party opening or truncating the file will only be delayed; the lease owner cannot replace (hardlink or rename) the file. (Well, it can, but the opener will always open the original file.) Also, renaming, hardlinking, and unlinking/deleting the file does not affect the file contents, and therefore are not affected at all by file leases.
Remember also that you need to handle the signal generated. You can use fcntl(fd, F_SETSIG, signum) to change the signal. I personally use a trivial signal handler -- one with an empty body -- to catch the signal, but there are other ways too.
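Putting those pieces together, a sketch of the whole lease dance might look like this (Linux-specific; the function name is mine):

#define _GNU_SOURCE
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static void lease_break(int sig) { (void)sig; /* empty: just interrupts */ }

int update_under_lease(const char *path)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = lease_break;           /* trivial empty-body handler */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGIO, &sa, NULL);           /* SIGIO is the default signal */

    int fd = open(path, O_RDWR);
    if (fd == -1)
        return -1;
    if (fcntl(fd, F_SETLEASE, F_WRLCK) == -1) {
        close(fd);                         /* file is open elsewhere */
        return -1;
    }
    /* ... perform the modification that must appear atomic ... */
    fcntl(fd, F_SETLEASE, F_UNLCK);        /* release: blocked openers resume */
    close(fd);
    return 0;
}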
A portable method to achieve semi-atomicity is to use a memory map using mmap(). The idea is to use memmove() or similar to replace the contents as quickly as possible, then use msync() to flush the changes to the actual storage medium.
If the memory map offset in the file is a multiple of the page size, the mapped pages reflect the page cache. That is, any other process reading the file, in any way -- mmap() or read() or their derivatives -- will immediately see the changes made by the memmove(). The msync() is only needed to make sure the changes are also stored on disk, in case of a system crash -- it is basically equivalent to fsync().
To avoid preemption (kernel interrupting the action due to the current timeslice being up) and page faults, I'd first read the mapped data to make sure the pages are in memory, and then call sched_yield(), before the memmove(). Reading the mapped data should fault the pages into page cache, and sched_yield() releases the rest of the timeslice, making it extremely likely that the memmove() is not interrupted by the kernel in any way. (If you do not make sure the pages are already faulted in, the kernel will likely interrupt the memmove() for each page separately. You won't see that in the process, but other processes see the modifications to occur in page-sized chunks.)
This is not exactly atomic, but it is practical: it does not give you any guarantees, only makes the race window very very short; therefore I call this semi-atomic.
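A sketch of this semi-atomic mmap approach; the function name is mine, and offset must be a multiple of the page size:

#include <string.h>
#include <sched.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

int patch_region(const char *path, off_t offset,
                 const char *newdata, size_t len)
{
    long page = sysconf(_SC_PAGESIZE);
    int fd = open(path, O_RDWR);
    if (fd == -1)
        return -1;
    char *map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                     fd, offset);
    if (map == MAP_FAILED) {
        close(fd);
        return -1;
    }
    for (size_t i = 0; i < len; i += (size_t)page)
        (void)*(volatile char *)(map + i);  /* fault every page in */
    sched_yield();                          /* give up the timeslice */
    memmove(map, newdata, len);             /* the quick replacement */
    msync(map, len, MS_SYNC);               /* flush changes to storage */
    munmap(map, len);
    close(fd);
    return 0;
}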
Note that this method is compatible with file leases. One could try to take a write lease on the file, but fall back to leaseless memory mapping if the lease is not granted within some acceptable time period, say a second or two. I'd use timer_create() and timer_settime() to create the timeout timer, and the same empty-body signal handler to catch the SIGALRM signal; that way the fcntl() is interrupted (returns -1 with errno == EINTR) when the timeout occurs -- with the timer interval set to some small value (say 25000000 nanoseconds, or 0.025 seconds) so it repeats very often after that, interrupting syscalls if the initial interrupt is missed for any reason.
Most userspace applications create a copy of the original file, modify the contents of the copy, then replace the original file with the copy.
Each process that opens the file will only see complete changes, never a mix of old and new contents. However, anyone keeping the file open will only see the original contents and will not be aware of any changes (unless they check for them). Most text editors do check, but daemons and other processes usually don't bother.
Remember that in Linux, the file name and its contents are two separate things. You can open a file, unlink/remove it, and still keep reading and modifying the contents for as long as you have the file open.
There are other approaches, too. I do not want to suggest any specific approach, because the optimal one depends heavily on the circumstances: Do the other processes keep the file open, or do they always (re)open it before reading the contents? Is atomicity preferred or absolutely required? Is the data plain text, structured like XML, or binary?
EDITED TO ADD:
Please note that there are no ways to guarantee beforehand that the file will be successfully modified atomically. Not in theory, and not in practice.
You might encounter a write error when the disk is full, for example, or the drive might hiccup at just the wrong moment. I'm only listing three practical ways to make the modification seem atomic in typical use cases.
The reason write leases are my favourite is that I can always use fcntl(fd, F_GETLEASE) to check whether the lease is still held. If it is not, then the write was not atomic.
High system load is unlikely to cause the lease to be broken for a 64k write, if the same data has been read just prior (so that it will likely be in page cache). If the process has superuser privileges, you can use setpriority(PRIO_PROCESS,getpid(),-20) to temporarily raise the process priority to maximum while taking the file lease and modifying the file. If the data to be overwritten has just been read, it is extremely unlikely to be moved to swap; thus swapping should not occur, either.
In other words, while it is quite possible for the lease method to fail, in practice it is almost always successful -- even without the extra tricks mentioned in this addendum.
Personally, I simply check if the modification was not atomic, using the fcntl() call after the modification, prior to msync()/fsync() (making sure the data hits the disk in case a power outage occurs); that gives me an absolutely reliable, trivial method to check whether the modification was atomic or not.
For configuration files and other sensitive data, I too recommend the rename method. (Actually, I prefer the hardlink approach used for NFS-safe file locking, which amounts to the same thing but uses a temporary name to detect naming races.) However, it has the problem that any process keeping the file open will have to check and reopen the file, voluntarily, to see the changed contents.
Disk writes cannot be atomic without a layer of abstraction. You should keep a journal and revert if a write is interrupted.
As far as I know, a write below the size of PIPE_BUF is atomic, but that guarantee applies to pipes and FIFOs, and I never rely on it for regular files. If the programs that access the file are written by you, you can use flock() to achieve exclusive access. This system call sets a lock on the file and lets other processes that know about the lock coordinate their access.
In my program, I hold two files open for writing: a content-file, containing chunks of data, and an index-file, containing a map of which chunks of data have been written so far.
I would like to flush them both to disk as efficiently as possible, with the only constraint being that the blocks in the content-file must be written before the corresponding blocks in the index-file (naturally).
The catch is that I would like to avoid blocking, i.e. doing an fsync, for both latency and throughput reasons.
Any ideas?
I don't think you can do this easily in a single execution path. You need fsync to have the write to disk guaranteed - and this is going to have to wait for the write.
I suspect it is possible (but not easy) to do this by delegating the writing task to a separate thread or process. Generate the data in your existing program and 'write' it to the second thread/process using any method that looks sensible. This can be non-blocking. The second thread would then write any new data to your content-file, then fsync, then write the index-file, then check for new data again. Key design decisions relate to how you separate the two execution paths, how you communicate between them, and whether you need to report the write back to the main program. This could still have latency and throughput issues, but that's part of the cost of choosing to keep the index-file and content-file in sync. At least there would be a chance of getting work done while waiting on the disk.
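A sketch of that second thread; queue_pop(), ack_write() and struct chunk are placeholders for whatever hand-off mechanism you choose:

#include <pthread.h>
#include <sys/types.h>
#include <unistd.h>

/* Placeholder work item; the hand-off mechanism is up to you. */
struct chunk {
    const void *data;  size_t len;  off_t data_off;   /* content block */
    const void *entry; size_t elen; off_t index_off;  /* index entry   */
};
struct chunk *queue_pop(void);    /* placeholder: blocks until work arrives */
void ack_write(struct chunk *c);  /* placeholder: report completion */

int content_fd, index_fd;         /* opened elsewhere */

/* Writer thread: data block first, fsync, then the index entry. */
void *writer_thread(void *arg)
{
    (void)arg;
    struct chunk *c;
    while ((c = queue_pop()) != NULL) {
        pwrite(content_fd, c->data, c->len, c->data_off);
        fsync(content_fd);        /* the data must be durable first */
        pwrite(index_fd, c->entry, c->elen, c->index_off);
        fsync(index_fd);          /* only then the corresponding index entry */
        ack_write(c);
    }
    return NULL;
}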
It could be worth looking at the source of any of the transactional databases to see whether this logic is encapsulated well enough to be useful to you. You could also investigate the sync option when mounting the file system holding the content-file.