What happens to open files which are not properly closed?

What happens if I do not close a file after writing to it?
Let us assume the program hit a "too many open files" error and crashed because of it.
Does the OS handle that for me? And if this damages the unclosed files, how do I notice that they are damaged?

From the exit() man page:
_exit() does close open file descriptors, and this may cause an unknown delay, waiting for pending output to finish.
Every return from main() ends in a call to exit(), so any descriptor you failed to close is closed by the OS on your behalf.

Generally speaking, if you write to a file and your application then crashes, the operating system will close the descriptor, flush any pending kernel buffers to disk, and clean up for you. The same occurs if your program exits without explicitly closing the files. This does not damage the files. One caveat: data still sitting in a user-space stdio buffer (handed to fprintf or fwrite but never flushed) is lost, because it dies with the process before the kernel ever sees it.
The bad situation is when you write to a file and someone pulls the plug on the computer.
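To make that distinction concrete, here is a minimal sketch (file names are made up): data handed to the kernel with write() survives a crash, while data still sitting in a stdio buffer does not.

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Data handed to the kernel via write() is safe from a crash:
       the kernel owns it from this point on and flushes it to disk. */
    int fd = open("raw.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd >= 0 && write(fd, "safe\n", 5) != 5)
        perror("write");

    /* Data still sitting in a user-space stdio buffer is not: it lives
       in the process's memory and dies with the process. */
    FILE *fp = fopen("buffered.log", "w");
    if (fp != NULL)
        fprintf(fp, "lost\n");          /* still in the FILE buffer */

    /* Simulate a crash: afterwards raw.log contains "safe", while
       buffered.log is almost certainly empty. */
    abort();
}

An fflush(fp) or fclose(fp) before the crash would have pushed the buffered line into the kernel and saved it.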

Related

Can you open a directory without blocking on I/O?

I'm working on a Linux/C application with strict timing requirements. I want to open a directory for reading without blocking on I/O (i.e. succeed only if the information is immediately available in the cache). If this request would block on I/O, I would like to know, so that I can abort and ignore this directory for now. I know that open() has a non-blocking option, O_NONBLOCK. However, it has this caveat:
Note that this flag has no effect for regular files and
block devices; that is, I/O operations will (briefly)
block when device activity is required, regardless of
whether O_NONBLOCK is set.
I assume that a directory entry is treated like a regular file. I don't know of a good way to prove/disprove this. Is there a way to open a directory without any I/O blocking?
You could try the coproc keyword in bash to run the operation in a background process. Maybe it could work for you.
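Since O_NONBLOCK genuinely does not help for regular files and directories, the same idea in C is to issue the possibly blocking open() from a worker thread and give up after a deadline. A minimal sketch, assuming glibc (pthread_timedjoin_np() is a GNU extension; the path and the 10 ms budget are placeholders):

#define _GNU_SOURCE             /* for pthread_timedjoin_np() */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void *open_dir(void *arg)
{
    /* This open() may block on disk I/O; it runs off the main thread. */
    return (void *)(long)open((const char *)arg, O_RDONLY | O_DIRECTORY);
}

int main(void)
{
    pthread_t t;
    void *res;
    struct timespec deadline;

    pthread_create(&t, NULL, open_dir, "/some/dir");   /* example path */

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_nsec += 10 * 1000 * 1000;              /* 10 ms budget */
    if (deadline.tv_nsec >= 1000000000L) {
        deadline.tv_sec += 1;
        deadline.tv_nsec -= 1000000000L;
    }

    if (pthread_timedjoin_np(t, &res, &deadline) != 0) {
        /* Still blocked on I/O: skip this directory for now. A real
           program would collect the eventual fd through a queue
           instead of detaching the thread and leaking the fd. */
        pthread_detach(t);
        puts("skipped: open would have blocked");
    } else {
        int fd = (int)(long)res;
        printf("opened immediately, fd = %d\n", fd);
        if (fd >= 0)
            close(fd);
    }
    return 0;
}

Compile with -pthread. This only bounds how long the caller waits; the blocked open() still completes eventually in the background.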

Check how many file descriptors are open on a file? (BSD, OS X)

How can a process interrogate the system, to see if a device (file) has been opened and left hanging?
Context: I'm having trouble setting attributes (tcsetattr) on a FTDI serial interface device, and, amongst other things, I'm wondering if there are any hanging file descriptors, preventing the process from getting the kind of lock it wants before changing attributes. If it is causing a problem, I also have a need to detect the problem and report it to the user (the code is susceptible to file-handle leaks).
It appears that the device can be opened multiple times (a shared mode) without error: I haven't advanced to the stage of trying to read or write to a device in that condition, because I haven't got past tcsetattr() yet.
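Two things can help here. Externally, lsof /dev/<device> (or fstat(1) on BSD/OS X) lists every process holding the device open. In code, you can at least prevent new descriptors from appearing while you configure the port by putting the tty into exclusive mode with TIOCEXCL. A minimal sketch, with a placeholder device path:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    /* The device path is a placeholder for the FTDI interface. */
    int fd = open("/dev/cu.usbserial", O_RDWR | O_NOCTTY | O_NONBLOCK);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* TIOCEXCL puts the tty into exclusive mode: further open()s by
       non-root processes fail with EBUSY, so no *new* descriptors can
       appear while we configure the port. It cannot detect descriptors
       that were already open before this point; use lsof/fstat for that. */
    if (ioctl(fd, TIOCEXCL) == -1)
        perror("ioctl(TIOCEXCL)");

    /* ... tcgetattr()/tcsetattr() configuration goes here ... */

    close(fd);
    return 0;
}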

Why does GDB break when writing to the network?

I get this error every time my program reaches a write() function. The program will continue again, but will stop on the next write() call. When I run this program outside of gdb, it runs properly.
Program received signal SIGPIPE, Broken pipe.
0x00007ffff794b340 in __write_nocancel () at ../sysdeps/unix/syscall-template.S:81
81 ../sysdeps/unix/syscall-template.S: No such file or directory.
I've been told that this happens when the socket is closed from the remote end, but how would that be happening?
Note: The server and client are both running on the same machine, and the server was prebuilt for me, so I don't have access to its code.
SIGPIPE is generated when the other side has closed the connection, and there are good reasons for its existence.
By default gdb catches SIGPIPE.
If you aren't interested in it, and chances are you aren't, simply disable it:
handle SIGPIPE nostop noprint pass
I've been told that this happens when the socket is closed from the remote end, but how would that be happening?
You mean why? Since you don't have the source we can only guess.
Perhaps it already sent all the data it wanted and closed the connection, because there's no point keeping it open. Remember, connections can be half-closed (that is, from one side): the server doesn't want to read any further, and just waits for you to read the remaining data and close your side. Probably nothing went wrong, but you have to decide that yourself, as only you know what the application protocol is.
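If you would rather handle the condition in code than just silence gdb, the usual pattern is to ignore SIGPIPE (or, on Linux, pass MSG_NOSIGNAL to send() per call) and treat EPIPE from write() as an ordinary error. A minimal sketch that provokes the situation with a pipe instead of a socket:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];

    /* Turn the fatal SIGPIPE into a plain EPIPE error from write(). */
    signal(SIGPIPE, SIG_IGN);

    if (pipe(fds) == -1)
        return 1;
    close(fds[0]);              /* simulate the peer closing its end */

    if (write(fds[1], "x", 1) == -1 && errno == EPIPE)
        fprintf(stderr, "peer closed the connection (EPIPE)\n");

    close(fds[1]);
    return 0;
}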

Handling C Read Only File Close Errors

I'm doing some basic file reading using open, read, and close (Files are opened with access mode O_RDONLY).
When it comes time to close the file, I can't think of a good way to handle a possible file close error to make sure that the file is closed properly.
Any suggestions?
In my experience, close will succeed even when it fails. There are several reasons for this.
I suspect that one of the big reasons close started to fail on some operating systems was AFS, a distributed file system from the '80s with interesting semantics: all your writes went to a local cache, and your data was written to the server only when you closed the file. AFS was also cryptographically authenticated, with tokens that expired after a time. So you could end up in an interesting situation where all the writes you did to a file were made while your tokens were valid, but the close, which actually talked to the file server, happened after they had expired, meaning that all the data you wrote to the local cache was lost. This is why close needed a way to tell the user that something went wrong. Most file editors handle this correctly (emacs refuses to mark the buffer as clean, for example), but I've rarely seen other applications that can handle it.
That being said, close can't really fail anyway. close is implicit during exit, exec (with close-on-exec file descriptors) and crashes that dump core. Those are situations where you can't fail. You can't have exit or a crash fail just because closing the file descriptor failed. What would you do when exit fails? Crash? What if crashing fails? Where do we go after that? Also, since almost no one checks for errors from close, if failing were common you'd end up with file descriptor leaks and information leaks (what if we fail to close some file descriptor before spawning an unprivileged process?). All this is too dangerous for an operating system to allow, so all the operating systems I've looked at (*BSD, Linux, Solaris) close the file descriptor even if the underlying filesystem close operation has failed.
In practice this means that you just call close and ignore any errors it returns. If you have a graceful way of handling it failing like editors do you can send a message to the user and let the user resolve the problem and reopen the file, write down the data and try to close again. Don't do anything automatically in a loop or otherwise. Errors in close are beyond your control in an application.
I guess the best thing to do is to retry a couple of times, with a short delay between attempts, then log the problem and move on.
The manual page for close() mentions EINTR as a possible error, which is why retrying looks attractive. Be aware, though, that on Linux the descriptor is released even when close() fails with EINTR, so retrying in a multithreaded program can close an unrelated descriptor that was opened in the meantime.
If your program is about to exit anyway, I wouldn't worry too much about this type of error checking, since any resources you've allocated are going to be de-allocated by the operating system anyway (on most/typical desktop/server platforms, that is).
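If you do want a deterministic point at which to catch write errors, calling fsync() before close() is the usual pattern; close() itself is then only reported, never retried. A minimal sketch (the file name is a placeholder):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("out.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    if (write(fd, "data\n", 5) != 5)
        perror("write");

    /* fsync() forces the data to the device and reports I/O errors at
       a point where we can still react to them, rather than leaving
       them to surface from close(). */
    if (fsync(fd) == -1)
        perror("fsync");

    /* Report a close() failure, but never retry: on Linux the
       descriptor is gone whether close() succeeded or not. */
    if (close(fd) == -1)
        perror("close");

    return 0;
}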

How does the kernel know a file is closed?

What exactly is the mechanism by which Linux knows that a file has been closed?
I know that APIs such as inotify deliver an IN_CLOSE_WRITE event when a file opened for writing is closed. But how does that work? What triggers the close of a file?
Similarly, how does the OS know that a file has been opened, and where does it register that fact?
The OS (i.e. the kernel) is the one that actually opens and closes files. A program has to ask the OS to open or close a file on its behalf, every time, via system calls (open(2), close(2)), so the OS can simply keep track of those calls as they pass through it.
Concretely, the kernel keeps a per-process file descriptor table and a system-wide open file table that record every open file and the current offset within it.
This may help: http://www.cs.kent.edu/~walker/classes/os.f07/lectures/Walker-11.pdf
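Because every open() and close() is a system call handled by the kernel, the kernel can notify interested watchers while it processes those calls; that is what inotify builds on. A minimal sketch that watches the current directory ("." is just an example path) for open and close-after-write events:

#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    /* Buffer aligned for struct inotify_event, as in the man page. */
    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));

    int fd = inotify_init1(0);
    if (fd < 0) {
        perror("inotify_init1");
        return 1;
    }

    if (inotify_add_watch(fd, ".", IN_OPEN | IN_CLOSE_WRITE) < 0) {
        perror("inotify_add_watch");
        return 1;
    }

    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);
        if (len <= 0)
            break;
        /* A single read() may return several variable-length events. */
        for (char *p = buf; p < buf + len; ) {
            const struct inotify_event *ev = (const struct inotify_event *)p;
            if (ev->mask & IN_OPEN)
                printf("opened: %s\n", ev->len ? ev->name : ".");
            if (ev->mask & IN_CLOSE_WRITE)
                printf("closed after writing: %s\n", ev->len ? ev->name : ".");
            p += sizeof *ev + ev->len;
        }
    }
    return 0;
}

Run it, then touch or edit a file in the same directory from another shell to watch the events arrive.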
