What happens if I don't call fclose() in a C program?

Firstly, I'm aware that opening a file with fopen() and not closing it is horribly irresponsible, and bad form. This is just sheer curiosity, so please humour me :)
I know that if a C program opens a bunch of files and never closes any of them, eventually fopen() will start failing. Are there any other side effects that could cause problems outside the code itself? For instance, if I have a program that opens one file, and then exits without closing it, could that cause a problem for the person running the program? Would such a program leak anything (memory, file handles)? Could there be problems accessing that file again once the program had finished? What would happen if the program was run many times in succession?

As long as your program is running, if you keep opening files without closing them, the most likely result is that you will run out of file descriptors/handles available for your process, and attempting to open more files will fail eventually. On Windows, this can also prevent other processes from opening or deleting the files you have open, since by default, files are opened in an exclusive sharing mode that prevents other processes from opening them.
Once your program exits, the operating system will clean up after you. It will close any files you left open when it terminates your process, and perform any other cleanup that is necessary (e.g. if a file was marked delete-on-close, it will delete the file then; note that that sort of thing is platform-specific).
However, another issue to be careful of is buffered data. Most file streams buffer data in memory before writing it out to disk. If you're using FILE* streams from the stdio library, then there are two possibilities:
Your program exited normally, either by calling the exit(3) function, or by returning from main (which implicitly calls exit(3)).
Your program exited abnormally; this can be via calling abort(3) or _Exit(3), dying from a signal/exception, etc.
If your program exited normally, the C runtime will take care of flushing any buffered streams that were open. So, if you had buffered data written to a FILE* that wasn't flushed, it will be flushed on normal exit.
Conversely, if your program exited abnormally, any buffered data will not be flushed. The OS just says "oh dear me, you left a file descriptor open, I better close that for you" when the process terminates; it has no idea there's some random data lying somewhere in memory that the program intended to write to disk but did not. So be careful about that.

The C standard says that calling exit (or, equivalently, returning from main) causes all open FILE objects to be closed as-if by fclose. So this is perfectly fine, except that you forfeit the opportunity to detect write errors.
EDIT: There is no such guarantee for abnormal termination (abort, a failed assert, receipt of a signal whose default behavior is to abnormally terminate the program -- note that there aren't necessarily any such signals -- and other implementation-defined means). As others have said, modern operating systems will clean up all externally visible resources, such as open OS-level file handles, regardless; however, FILEs are likely not to be flushed in that case.
There certainly have been OSes that did not clean up externally visible resources on abnormal termination; it tends to go along with not enforcing hard privilege boundaries between "kernel" and "user" code and/or between distinct user space "processes", simply because if you don't have those boundaries it may not be possible to do so safely in all cases. (Consider, for instance, what happens if you write garbage over the open-file table in MS-DOS, as you are perfectly able to do.)

Assuming you exit under control, using the exit() system call or returning from main(), then the open file streams are closed after flushing. The C Standard (and POSIX) mandate this.
If you exit out of control (core dump, SIGKILL) etc, or if you use _exit() or _Exit(), then the open file streams are not flushed (but the file descriptors end up closed, assuming a POSIX-like system with file descriptors - Standard C does not mandate file descriptors). Note that _Exit() is mandated by the C99 standard, but _exit() is mandated by POSIX (but they behave the same on POSIX systems). Note that file descriptors are separate from file streams. See the discussion of 'Consequences of Program Termination' on the POSIX page for _exit() to see what happens when a program terminates under Unix.

When the process dies, most modern operating systems (the kernel specifically) will free all of your handles and allocated memory.

What's the point of fclose()? [duplicate]

From what I read, fclose() is basically like free() when memory has been allocated, but I also read that the operating system will close that file for you and flush away any open streams right after the program terminates. I've even tested a few programs without fclose() and they all seem to work fine.
A long-running process (ex. a database or web browser) may need to open many files during its lifetime; keeping unused files open wastes resources, and potentially locks other processes out of using the files.
Additionally, fclose flushes the user-space buffer that is frequently used when writing to files to improve performance; if the process exits without flushing that buffer (with fflush/fclose), the data still in the buffer will be lost.
Most modern OSes will also reclaim the memory you malloc()'ed, but using free() when appropriate is still good practice. The point is that once you no longer need a resource you should relinquish it, so the system can repurpose whatever backing resources were reclaimed for use by other applications (typically, memory). Also there are limits on the number of file descriptors you can keep open at the same time.
Apart from that, there are further considerations in the case of open() and friends: by default, open file descriptors are inherited across fork()'ed process boundaries (and shared between threads). This means that if you fail to close() file descriptors, a child process may be able to access files opened by the parent process. This is typically undesirable, and it is a trivial security hole if a privileged parent process spawns a child process with lesser privileges.
Additionally, the semantics of unlink() and friends are that the file contents are only 'deleted' once the last open file descriptor to the file is close()'d so again: if you keep files open for longer than strictly necessary you cause suboptimal behaviour in the overall system.
Finally, in the case of sockets a close() also corresponds to disconnecting from the remote peer.

Why do we need to close a file in C? [duplicate]

Suppose that we have opened a file using fopen() in C and we unintentionally forget to close it using fclose(). What could the consequences be? Also, what are the solutions if we are not provided with the source code but only the executable?
The consequences are that a file descriptor is "leaked". The operating system uses some descriptor, and has some resources associated with that open file. If you fopen and don't fclose, then that descriptor won't be cleaned up, and will persist until the program exits.
This problem is compounded if the file can potentially be opened multiple times. As the program runs more and more descriptors will be leaked, until eventually the operating system either refuses or is unable to create another descriptor, in which case the call to fopen fails.
If you are only provided with the executable, not the source code, your options are very limited. At that point you'd have to either try decompiling or rewriting the assembly by hand, neither of which are attractive options.
The correct thing to do is file a bug report, and then get an updated/fixed version.
If there are a lot of files open but not closed properly, the program will eventually run out of file handles and/or memory space and crash.
Suggest you engage your developer to update their code.
The consequences are implementation dependent. fopen, fclose, and the associated functions perform buffered input/output, so anything "written" to a file is in fact first written to an internal buffer. That buffer is only flushed to the output when the library decides to -- which could be on every line, on every write, or only when a full block accumulates, depending on the implementation.
fopen will most likely use open to obtain an actual file descriptor from the operating system. On most systems (Linux, Windows, etc.) the OS file descriptor is closed by the OS when the process terminates; however, if the program does not terminate, the OS file descriptors leak and you will eventually run out of file descriptors and die.
A standard may mandate a specific behaviour when the program terminates, either cleanly or through a crash, but the fact is that you cannot rely on this, as not all implementations follow it.
So your risk is that you will lose some of the data your program believed it had written -- the data that was sitting in the internal buffer but never flushed -- or that you will run out of file descriptors and die.
So, fix the code.

how to reboot a Linux system when a fatal error occurs (C programming)

I am writing a C program for an embedded Linux (debian-arm) device. In some cases, e.g. if a fatal error occurs on the system/program, I want the program to reboot the system via system("reboot"); after logging the error(s) via syslog(). My program includes multiple threads, UDP sockets, several fwrite()/fopen() and malloc() calls, ...
I would like to ask a few questions about what the program should do just before rebooting the system, apart from the syslog call. I would appreciate knowing how experienced programmers handle these things.
Is it necessary to close the open sockets (UDP) and threads just before rebooting? If so, is there a function/system call that closes all open sockets and threads? If the threads need to be closed and there is no such global function/call to end them, am I supposed to execute pthread_exit(NULL); for each specific thread? Do I need to use something like goto to end each thread?
How should the program close the files that fopen and fwrite use? Is there a global call to close the files in use, or do I need to find out which files are in use and fclose each one? I see examples on the forums where fflush(), fclose(), sync(), ... are used; which one(s) would you recommend? In a generic case, would it cause any problem if all of these functions were used (even where unnecessary)?
It is not necessary to free the memory that malloc allocated, is it?
Do you suggest any other tasks to be performed?
The system automatically issues SIGTERM signals to all processes as one of the steps in rebooting. As long as you correctly handle SIGTERM, you need not do anything special after invoking the reboot command. The normal idiom for "correctly handling SIGTERM" is:
Create a pipe to yourself.
The signal handler for SIGTERM writes one byte (any value will do) to that pipe.
Your main select loop includes the read end of that pipe in the set of file descriptors of interest. If that pipe ever becomes readable, it's time to exit.
Furthermore, when a process exits, the kernel automatically closes all its open file descriptors, terminates all of its threads, and deallocates all of its memory. And if you exit cleanly, i.e. by returning from main or calling exit, all stdio FILEs that are still open are automatically flushed and closed. Therefore, you probably don't have to do very much cleanup on the way out -- the most important thing is to make sure you finish generating any output files and remove any temporary files.
You may find the concept of crash-only software useful in figuring out what does and does not need cleaning up.
The only cleanup you need to do is anything your program needs to start up in a consistent state. For example, if you collect some data internally then write it to a file, you will need to ensure this is done before exiting. Other than that, you do not need to close sockets, close files, or free all memory. The operating system is designed to release these resources on process exit.

What happens to a process when the filesystem is full

What happens to a process if the filesystem is full? Does the kernel send us a signal to shut down, and if so, which signal is it? Obviously, a program will probably crash if it writes to the file system, but I'm curious as to how this occurs (in gory kernel/operating-system detail).
What happens to a process if the filesystem fills up?
Operations that would require additional disk space on the full partition (like creating or appending to a file) fail with an errno of ENOSPC.
No signal is sent, as a full filesystem is not a critical condition which makes a signal necessary. It's a routine, easily handled error.
There is no reason a program should crash when the filesystem is full. Obviously file writes will fail, but a well-written program should be able to cope with that - in C, this would mean that fopen returns NULL or ferror returns a non-zero value, etc. I have encountered this many times, and some nasty things can happen such as overwriting a file with a blank version, but never a program crash. If it does happen, it is presumably because the author of the program tried to use the NULL pointer returned by fopen, or something similar, in which case the program would receive a SIGSEGV as usual.

What happens if you exit a program without doing fclose()?

Question:
What happens if I exit program without closing files?
Are there some bad things happening (e.g. some OS level file descriptor array is not freed up..?)
And please answer for both cases:
programmed exiting
unexpected crash
Code examples:
With programmed exiting I mean something like this:
#include <stdio.h>
#include <stdlib.h>

int main() {
    fopen("foo.txt", "r");
    exit(1);
}
With unexpected crash I mean something like this:
#include <stdio.h>

int main() {
    int *ptr = NULL;
    fopen("foo.txt", "r");
    ptr[0] = 0; // causes a segmentation fault
}
P.S.
If the answer is programming language dependent then I would like to know about C and C++.
If the answer is OS dependent then I am interested in Linux and Windows behaviour.
It depends on how you exit. Under controlled circumstances (via exit() or a return from main()), the data in the (output) buffers will be flushed and the files closed in an orderly manner. Other resources that the process had will also be released.
If your program crashes out of control, or if it calls one of the alternative _exit() or _Exit() functions, then the system will still clean up (close) open file descriptors and release other resources, but the buffers won't be flushed, etc.
The OS tidies up for you. It is like going around to a friend's house: it is polite to shut the bathroom door yourself rather than have them do it for you.
All handles that belong to your process will be cleaned up. However any "named" kernel objects like named pipes and others will stick around.
