Garbage collection for `fopen()`? (C)

The Boehm GC only deals with memory allocation. But suppose one wants to use garbage collection to deal with fopen() so that fclose() is no longer needed. Is there a way to do that in C?
P.S.
For example, PyPy takes the garbage collection approach to deal with opening files.
The most obvious effect of this is that files (and sockets, etc) are not promptly closed when they go out of scope. For files that are opened for writing, data can be left sitting in their output buffers for a while, making the on-disk file appear empty or truncated.
http://doc.pypy.org/en/latest/cpython_differences.html

In case it's not obvious, nothing the Boehm GC does is possible in portable C. The whole library is a huge heap of undefined behavior that happens to work on some (many?) real-world implementations. The more advanced C implementations get, especially in the area of safety, the less likely any of it is to continue to work.
With that said, I don't see any reason the same principle couldn't be extended to FILE* handles. The problem, however, is that with it necessarily being a conservative GC, false positives for remaining references would prevent the file from being closed, and that has visible consequences on the state of the process and the filesystem. If you explicitly fflush in the right places, it might be acceptably only-half-broken, though.
There's absolutely no meaningful way to do this with file descriptors, on the other hand, because they are small integers. You'll essentially always have false positives for remaining references.

TL;DR: Yes, but. More but than yes.
First things first. Since the standard C library must itself automatically garbage collect open file handles in the exit() function (see standard quotes below), it is not necessary to ever call fclose as long as:
You are absolutely certain that your program will eventually terminate either by returning from main() or by calling exit().
You don't care how much time elapses before the file is closed (making data written to the file available to other processes).
You don't need to be informed if the close operation failed (perhaps because of disk failure).
Your process will not open more than FOPEN_MAX files, and will not attempt to open the same file twice. (FOPEN_MAX must be at least eight, but that includes the three standard streams.)
Of course, aside from very simple toy applications, those guarantees are pretty restrictive, particularly for files opened for writing. For a start, how are you going to guarantee that the host does not crash or get powered down (voiding condition 1)? So most programmers regard it as very bad style to not close all open files.
All the same, it is possible to imagine an application which only opens files for reading. In that case, the most serious issue with never calling fclose will be the last one, the simultaneous open file limit. Five is a pretty small number, and even though most systems have much higher limits, they almost all have limits; if an application runs long enough, it will inevitably open too many files. (The second half of condition 4, about opening the same file twice, might be a problem too, although not all operating systems impose such a restriction, and few impose it on files opened only for reading.)
As it happens, these are precisely the issues that garbage collection can, in theory, help solve. With a bit of work, it is possible to get a garbage collector to help manage the number of simultaneously open files. But... as mentioned, there are a number of Buts. Here's a few:
The standard library is under no obligation to dynamically allocate FILE objects using malloc, or indeed to dynamically allocate them at all. (A library which only allowed eight open files might have an internal statically allocated array of eight FILE structures, for example.) So the garbage collector might never see the storage allocations. In order to involve the garbage collector in the removal of FILE objects, every FILE* needs to be wrapped inside a dynamically-allocated proxy (a "handle"), and every interface which takes or returns FILE* pointers must be wrapped with one which creates a proxy. That's not too much work, but there are a lot of interfaces to wrap and the use of the wrappers basically relies on source modification; you might find it difficult to introduce FILE* proxies if some files are opened by external library functions.
Although the garbage collector can be told what to do before it deletes certain objects (see below), most garbage collector libraries have no interface which provides for an object creation limit other than the availability of memory. The garbage collector can only solve the "too many open files" problem if it knows how many files are allowed to be open simultaneously, but it doesn't know and it doesn't have a way for you tell it. So you have to arrange for the garbage collector to be called manually when this limit is about to be breached. Of course, since you are already wrapping all calls to fopen, as per point 1, you can add this logic to your wrapper, either by tracking the open file count, or by reacting to an error indication from fopen(). (The C standard doesn't specify a portable mechanism for detecting this particular error, but Posix says that fopen should fail and set errno to EMFILE if the process has too many files open. Posix also defines the ENFILE error value for the case where there are too many files open in total over all processes; it's probably worthwhile to consider both of these cases.)
In addition, the garbage collector doesn't have a mechanism to limit garbage collection to a single resource type. (It would be very difficult to implement this in a mark-sweep garbage collector, such as the BDW collector, because all used memory needs to be scanned to find live pointers.) So triggering garbage collection whenever all file descriptor slots are used up could turn out to be quite expensive.
Finally, the garbage collector does not guarantee that garbage will be collected in a timely manner. If there is no resource pressure, the garbage collector could stay dormant for a long time, and if you are relying on the garbage collector to close your files, that means that the files could remain open for an unlimited amount of time even though they are no longer in use. So the first two conditions in the original list of requirements for omitting fclose() continue to be in force, even with a garbage collector.
So. Yes, but, but, but, but. Here's what the Boehm GC documentation recommends (abbreviated):
Actions that must be executed promptly… should be handled by explicit calls in the code.
Scarce system resources should be managed explicitly whenever convenient. Use [garbage collection] only as a backup mechanism for the cases that would be hard to handle explicitly.
If scarce resources are managed with [the garbage collector], the allocation routine for that resource (e.g. open file handles) should force a garbage collection (two if that doesn't suffice) if it finds itself short of the resource.
If extremely scarce resources are managed (e.g. file descriptors on systems which have a limit of 20 open files), it may be necessary to introduce a descriptor caching scheme to hide the resource limit.
Now, suppose you've read all of that, and you still want to do it. It's actually pretty simple. As mentioned above, you need to define a proxy object, or handle, which holds a FILE*. (If you are using Posix interfaces like open() which use file descriptors -- small integers -- instead of FILE structures, then the handle holds the fd. This is a different object type, obviously, but the mechanism is identical.)
In your wrapper for fopen() (or open(), or any of the other calls which return open FILE*s or files), you dynamically allocate a handle, and then (in the case of the Boehm GC) call GC_register_finalizer to tell the garbage collector what function to call when the resource is about to be deleted. Almost all GC libraries have some such facility; search for finalizer in their documentation. Here's the documentation for the Boehm collector, out of which I extracted the list of warnings above.
Watch out to avoid race conditions when you are wrapping the open call. The recommended practice is as follows:
Dynamically allocate the handle.
Initialize its contents to a sentinel value (such as -1 or NULL) which indicates that the handle has not yet been assigned to an open file.
Register a finalizer for the handle. The finalizer function should check for the sentinel value before attempting to call fclose(), so registering the handle at this point is fine.
Open the file (or other such resource).
If the open succeeds, reset the handle to hold the FILE* (or descriptor) returned from the open. If the failure has to do with resource exhaustion, trigger a manual garbage collection and repeat as necessary. (Be careful to limit the number of times you do that for a single open wrapper. Sometimes you need to do it twice, but three consecutive failures probably indicates some other kind of problem.)
If the open eventually succeeded, return the handle. Otherwise, optionally deregister the finalizer (if your GC library allows that) and return an error indication.
Obligatory C standard quotes
Returning from main() is the same as calling exit()
§5.1.2.2.3 (Program termination): (Only applies to hosted implementations)
If the return type of the main function is a type compatible with int, a return from the initial call to the main function is equivalent to calling the exit function with the value returned by the main function as its argument; reaching the } that terminates the main function returns a value of 0.
Calling exit() flushes all file buffers and closes all open files
§7.22.4.4 (The exit function):
Next, all open streams with unwritten buffered data are flushed, all open streams are closed, and all files created by the tmpfile function are removed…

Related

CloseHandle necessary when using HeapDestroy?

I allocated an array of HANDLE on an Heap and then each handle is associated with a thread.
Once I'm finished with the work, do I have to call CloseHandle() on each of them before calling HeapDestroy()? Or does the latter call make the first useless?
Always close a handle once you've finished with it; it is good practice. The Windows kernel has tables which track assigned handles and whom they are assigned to, so it is in your best interest to remember to close them.
Handle leaks are also a real thing: a caller requests a handle but never closes it, and leaked handles pile up over time.
You can also occasionally cause other problems by not closing handles (e.g. sharing violations if you opened a handle to a file and denied sharing but you've kept the handle open when you no longer need the open handle).
To be precise, though, handles are opaque values rather than real pointers: the Windows kernel translates them through an internal, undocumented, non-exported table which maps each handle to the address of the kernel object behind it.
Yes, certainly you must first close the handles! Windows does not know (or care) what data you have stored in your heap, so it cannot close the handles automatically.

Can I adapt a function that writes to disk to write to memory

I have third-party library with a function that does some computation on the specified data, and writes the results to a file specified by file name:
int manipulateAndWrite(const char *filename,
                       const FOO_DATA *data);
I cannot change this function, or reimplement the computation in my own function, because I do not have the source.
To get the results, I currently need to read them from the file. I would prefer to avoid the write to and read from the file, and obtain the results into a memory buffer instead.
Can I pass a filepath that indicates writing to memory instead of a
filesystem?
Yes, you have several options, although only the first suggestion below is supported by POSIX. The rest of them are OS-specific, and may not be portable across all POSIX systems, although I do believe they work on all POSIXy systems.
You can use a named pipe (FIFO), and have a helper thread read from it concurrently to the writer function.
Because there is no file per se, the overhead is just the syscalls (write and read); basically just the overhead of interprocess communication, nothing to worry about. To conserve resources, do create the helper thread with a small stack (using pthread_attr_setstacksize() etc.), as the default stack size tends to be huge (on the order of several megabytes; 2*PTHREAD_STACK_MIN should be plenty for helper threads.)
You should ensure the named pipe is in a safe directory, accessible only to the user running the process, for example.
In many POSIXy systems, you can create a pipe or a socket pair, and access it via /dev/fd/N, where N is the descriptor number in decimal. (In Linux, /proc/self/fd/N also works.) This is not mandated by POSIX, so may not be available on all systems, but most do support it.
This way, there is no actual file per se, and the function writes to the pipe or socket. If the data written by the function is at most PIPE_BUF bytes, you can simply read the data from the pipe afterwards; otherwise, you do need to create a helper thread to read from the pipe or socket concurrently to the function, or the write will block.
In this case, too, the overhead is minimal.
On ELF-based POSIXy systems (basically all), you can interpose the open(), write(), and close() syscalls or C library functions.
(In Linux, there are two basic approaches, one using the linker --wrap, and one using dlsym(). Both work fine for this particular case. This ability to interpose functions is based on how ELF binaries are linked at run time, and is not directly related to POSIX.)
You first set up the interposing functions, so that open() detects if the filename matches your special "in-memory" file, and returns a dedicated descriptor number for it. (You may also need to interpose other functions, like ftruncate() or lseek(), depending on what the function actually does; in Linux, you can run a binary under ptrace to examine what syscalls it actually uses.)
When write() is called with the dedicated descriptor number, you simply memcpy() it to a memory buffer. You'll need to use global variables to describe the allocated size, size used, and the pointer to the memory buffer, and probably be prepared to resize/grow the buffer if necessary.
When close() is called with the dedicated descriptor number, you know the memory buffer is complete, and the contents ready for processing.
You can use a temporary file on a RAM filesystem. While the data is technically written to a file and read back from it, the operations involve RAM only.
You should arrange for a default path to one to be set at compile time, and for individual users to be able to override that for their personal needs, for example via an environment variable (YOURAPP_TMPDIR?).
There is no need for the application to try and look for a RAM-based filesystem: choices like this are, and should be, up to the user. The application should not even care what kind of filesystem the file is on, and should just use the specified directory.
Alternatively, you could avoid using that library function altogether. Take a look at this on how to write to in-memory files:
Is it possible to create a C FILE object to read/write in memory

Is there a better way to manage file pointer in C?

Is it better to use fopen() and fclose() at the beginning and end of every function that uses the file, or is it better to pass the file pointer to each of these functions? Or even to set the file pointer as an element of the struct the file is related to.
I have two projects going on and each one uses one method (because I only thought of passing the file pointer after I began the first one).
When I say better, I mean in terms of speed and/or readability. What's best practice?
Thank you!
It depends. You certainly should document what function is fopen(3)-ing a FILE handle and what function is expecting to fclose(3) it.
You might put the FILE* in a struct, but you should have a convention about who should read, write, and close the file, and when.
Be aware that open files are somewhat expensive resources in a process (= your running program). BTW, this is also operating system and file system specific. And FILE handles are buffered; see fflush(3) & setvbuf(3).
On small systems, the maximal number of fopen-ed file handles could be as small as a few dozen. On a current Linux desktop, a process could have a few thousand open file descriptors (which the internal FILE keeps, along with its buffers). In any case, it is a rather precious and scarce resource (on Linux, you might limit it with setrlimit(2)).
Be aware that disk IO is very slow w.r.t. CPU.

Caching file pointers in C

I need to cache file pointers in my program, but the problem is that I may have multiple threads accessing that file pointer cache. For example, if thread 1 asks for a file pointer and a cache miss occurs, fopen is called and the pointer is cached. Now when thread 2 arrives and a cache hit occurs, both threads share the same read/write position, leading to errors. Some things I thought of -
I could keep track of when the file is in use, but currently I don't know when it will be released, and including this feature disturbs my design
I could send a duplicate of the file pointer in case of a hit, but I don't know any way of doing this so that these two copies do not share read/write locations
How should I proceed?
Are you concerned about optimizing out the file open operation? I think you are making it way more complex and error-prone than it should be. File pointers (FILE*) are not thread-safe structures, so you cannot share them across threads.
What you probably need to do (if you really want to cache the file open operations) is to keep a dictionary mapping filename to a file descriptor (an int) and have a thread-safe function to return a descriptor by name or open if it's not in the dictionary.
And of course doing I/O to the same file descriptor from multiple threads needs to be regulated as well.

Atomically write 64kB

I need to write something like 64 kB of data atomically in the middle of an existing file. That is all, or nothing should be written. How to achieve that in Linux/C?
I don't think it's possible, or at least there's not any interface that guarantees as part of its contract that the write would be atomic. In other words, if there is a way that's atomic right now, that's an implementation detail, and it's not safe to rely on it remaining that way. You probably need to find another solution to your problem.
If however you only have one writing process, and your goal is that other processes either see the full write or no write at all, you can just make the changes in a temporary copy of the file and then use rename to atomically replace it. Any reader that already had a file descriptor open to the old file will see the old contents; any reader opening it newly by name will see the new contents. Partial updates will never be seen by any reader.
There are a few approaches to modify file contents "atomically". While technically the modification itself is never truly atomic, there are ways to make it seem atomic to all other processes.
My favourite method in Linux is to take a write lease using fcntl(fd, F_SETLEASE, F_WRLCK). It will only succeed if fd is the only open descriptor to the file; that is, nobody else (not even this process) has the file open. Also, the file must be owned by the user running the process, or the process must run as root, or the process must have the CAP_LEASE capability, for the kernel to grant the lease.
When successful, the lease owner process gets a signal (SIGIO by default) whenever another process is opening or truncating the file. The opener will be blocked by the kernel for up to /proc/sys/fs/lease-break-time seconds (45 by default), or until the lease owner releases or downgrades the lease or closes the file, whichever is shorter. Thus, the lease owner has dozens of seconds to complete the "atomic" operation, without any other process being able to see the file contents.
There are a couple of wrinkles one needs to be aware of. One is the privileges or ownership required for the kernel to allow the lease. Another is the fact that the other party opening or truncating the file will only be delayed; the lease owner cannot replace (hardlink or rename) the file. (Well, it can, but the opener will always open the original file.) Also, renaming, hardlinking, and unlinking/deleting the file does not affect the file contents, and therefore are not affected at all by file leases.
Remember also that you need to handle the signal generated. You can use fcntl(fd, F_SETSIG, signum) to change the signal. I personally use a trivial signal handler -- one with an empty body -- to catch the signal, but there are other ways too.
A portable method to achieve semi-atomicity is to use a memory map using mmap(). The idea is to use memmove() or similar to replace the contents as quickly as possible, then use msync() to flush the changes to the actual storage medium.
If the memory map offset in the file is a multiple of the page size, the mapped pages reflect the page cache. That is, any other process reading the file, in any way -- mmap() or read() or their derivatives -- will immediately see the changes made by the memmove(). The msync() is only needed to make sure the changes are also stored on disk, in case of a system crash -- it is basically equivalent to fsync().
To avoid preemption (kernel interrupting the action due to the current timeslice being up) and page faults, I'd first read the mapped data to make sure the pages are in memory, and then call sched_yield(), before the memmove(). Reading the mapped data should fault the pages into page cache, and sched_yield() releases the rest of the timeslice, making it extremely likely that the memmove() is not interrupted by the kernel in any way. (If you do not make sure the pages are already faulted in, the kernel will likely interrupt the memmove() for each page separately. You won't see that in the process, but other processes see the modifications to occur in page-sized chunks.)
This is not exactly atomic, but it is practical: it does not give you any guarantees, only makes the race window very very short; therefore I call this semi-atomic.
Note that this method is compatible with file leases. One could try to take a write lease on the file, but fall back to leaseless memory mapping if the lease is not granted within some acceptable time period, say a second or two. I'd use timer_create() and timer_settime() to create the timeout timer, and the same empty-body signal handler to catch the SIGALRM signal; that way the fcntl() is interrupted (returns -1 with errno == EINTR) when the timeout occurs -- with the timer interval set to some small value (say 25000000 nanoseconds, or 0.025 seconds) so it repeats very often after that, interrupting syscalls if the initial interrupt is missed for any reason.
Most userspace applications create a copy of the original file, modify the contents of the copy, then replace the original file with the copy.
Each process that opens the file will only see complete changes, never a mix of old and new contents. However, any process keeping the file open will only see the original contents, and will not be aware of any changes (unless it checks for them itself). Most text editors do check, but daemons and other processes usually do not bother.
Remember that in Linux, the file name and its contents are two separate things. You can open a file, unlink/remove it, and still keep reading and modifying the contents for as long as you have the file open.
There are other approaches, too. I do not want to suggest any specific approach, because the optimal one depends heavily on the circumstances: Do the other processes keep the file open, or do they always (re)open it before reading the contents? Is atomicity preferred or absolutely required? Is the data plain text, structured like XML, or binary?
EDITED TO ADD:
Please note that there are no ways to guarantee beforehand that the file will be successfully modified atomically. Not in theory, and not in practice.
You might encounter a write error with the disk full, for example. Or the drive might hiccup at just the wrong moment. I'm only listing three practical ways to make it seem atomic in typical use cases.
The reason write leases are my favourite is that I can always use fcntl(fd, F_GETLEASE) afterwards to check whether the lease is still held (it returns F_WRLCK if so). If not, then the write was not atomic.
High system load is unlikely to cause the lease to be broken for a 64k write, if the same data has been read just prior (so that it will likely be in page cache). If the process has superuser privileges, you can use setpriority(PRIO_PROCESS,getpid(),-20) to temporarily raise the process priority to maximum while taking the file lease and modifying the file. If the data to be overwritten has just been read, it is extremely unlikely to be moved to swap; thus swapping should not occur, either.
In other words, while it is quite possible for the lease method to fail, in practice it is almost always successful -- even without the extra tricks mentioned in this addendum.
Personally, I simply check if the modification was not atomic, using the fcntl() call after the modification, prior to msync()/fsync() (making sure the data hits the disk in case a power outage occurs); that gives me an absolutely reliable, trivial method to check whether the modification was atomic or not.
For configuration files and other sensitive data, I too recommend the rename method. (Actually, I prefer the hardlink approach used for NFS-safe file locking, which amounts to the same thing but uses a temporary name to detect naming races.) However, it has the problem that any process keeping the file open will have to check and reopen the file, voluntarily, to see the changed contents.
Disk writes cannot be atomic without a layer of abstraction. You should keep a journal and revert if a write is interrupted.
As far as I know, a write of up to PIPE_BUF bytes to a pipe or FIFO is atomic; that guarantee does not extend to regular files, however, so I never rely on it here. If the programs that access the file are all written by you, you can use flock() to achieve exclusive access. This system call sets an advisory lock on the file; other processes that check the lock can then tell whether they may access it.
