Can a failed fopen impact the filesystem? - c

If fopen(path, "w") succeeds, then the file will be truncated. If the fopen fails, are there any guarantees that the file is not modified?

No, there are no guarantees about the state of the file if fopen(path, "w") fails. The failure could come from any step: opening the file, committing the truncation to disk, and so on. The only guarantee a failure provides is that you don't have access to the file.

In practice, the only reason fopen() would fail is that the file is somehow inaccessible or cannot be modified. If you are worried about the file being modified on failure, though, you could instead use the open() function with the flag O_WRONLY (and without O_TRUNC). You could then convert the resulting descriptor to a FILE * pointer by using fdopen().

Excellent question, and I think the answer is no. fopen has to allocate a FILE structure, and the natural order of operations when implementing it is to open the file first, then attempt to allocate the FILE. That way, fopen is just a wrapper around fdopen (or a similar function with some leading underscores or whatnot for namespace conformance).
Personally I would not use stdio functions at all when you care about the state of your files after any failure. Even once you have the file open, stdio's buffering makes it almost impossible to know where an error occurred if a write function ever returns failure, and even more impossible to return your file to a usable, consistent state.

Related

Are there any circumstances in which an IO function can never fail?

Most IO functions (fopen, fread, etc.) can fail if the system encounters an error, or if a passed argument is invalid (e.g. calling fseek with a whence value other than SEEK_SET, SEEK_CUR, or SEEK_END), but they can also fail for reasons beyond the programmer's control. Checking for success is tedious and might sometimes be unnecessary. Are there any circumstances in which a function will not fail, even though it could fail in other circumstances? Such as calling fclose on a file opened in read-only mode, where the file pointer was freshly returned by fopen? When do I not have to worry about an IO function failing? Are these such circumstances?
File was opened in READ mode
File pointer was returned by fopen and nullchecked
Read operations
What other circumstances?
I do not see why fclose would fail on a file opened in READ mode: there are no buffers to be written to the file, so all that needs to happen is freeing memory, and since free cannot fail, neither should fclose. Is this assumption correct?
Also, should I avoid rewind because it cannot indicate an error?

confused about using ftell() to check if the file is empty

I want to add a structure to a binary file, but first I need to check whether the file has previous data stored in it. If not, I can add the structure; otherwise I'll have to read all the stored data and insert the structure in its correct place. But I got confused about how to check if the file is empty. I thought about trying something like this:
long size = 0;
if (fp != NULL)
{
    fseek(fp, 0, SEEK_END);
    size = ftell(fp);
    rewind(fp);
}
if (size == 0)
{
    // print your error message here
}
But if the file is empty or not yet created, how can the file pointer not be NULL? What's the point of using ftell() if I can simply do something like this:
if (fp == NULL) {
    fp = fopen("data.bin", "wb");
    fwrite(&my_struct, sizeof my_struct, 1, fp);  /* my_struct: the structure to store */
    fclose(fp);
}
I know that NULL can be returned in other cases, such as protected files, but I still can't understand how using ftell() is effective when the file pointer will always be NULL if the file is empty. Any help will be appreciated :)
i need to check whether the file has previous data stored in it
There might be no portable and robust way to do that (the file might change during the check, because other processes are using it). For example, on Unix or Linux, the file might be opened by another process writing into it while your own program is running (and that might even happen between your ftell and your rewind). And your program might itself be running in several processes.
You could use operating system specific functions. For POSIX (including Linux and many Unixes like MacOSX or Android), you might use stat(2) to query the file status (including its size with st_size). But after that, some other process might still write data into that file.
You might consider advisory locking, e.g. with flock(2), but then you adopt the system-wide convention that every program using that file would lock it.
You could use some database with ACID properties. Look into sqlite or into RDBMS systems like PostGreSQL or MariaDB. Or indexed file library like gdbm.
You can continue coding with the implicit assumption (but be aware of it) that only your program is using that file, and that your program has at most one process running it.
if the file is empty [...] how can the file pointer not be NULL ?
As Increasingly Idiotic answered, fopen can fail, but it usually doesn't fail on empty files. Of course, you need to handle fopen failure (see also this). So most of the time your fp will be valid, and your code chunk (assuming no other process is changing the file simultaneously) using ftell and rewind is an approximate way to check that the file is empty. BTW, if you read (e.g. with fread or fgetc) something from that file, the read will fail if the file is empty, so you probably don't need to check for emptiness beforehand.
A POSIX-specific way to query the status (including size) of an fopen-ed file is to use fileno(3) and fstat(2) together, e.g. fstat(fileno(fp), &mystat) after having declared struct stat mystat;
fopen() does not return NULL for empty files.
From the documentation:
If successful, returns a pointer to the object that controls the opened file stream ... On error, returns a null pointer.
NULL is returned only when the file could not be opened. The file could fail to open due to any number of reasons such as:
The file doesn't exist
You don't have permissions to read the file
The file cannot be opened multiple times simultaneously.
More possible reasons in this SO answer
In your case, if fp == NULL, you'll need to figure out why fopen failed and handle each case accordingly. In most cases, an empty file will open just fine and return a non-NULL file pointer.

ftruncate on file opened with fopen

Platform is Ubuntu Linux on ARM.
I want to write a string to a file, but I want every time to truncate the file and then write the string, i.e. no append.
I have this code:
f = fopen("/home/user1/refresh.txt", "w");
fputs("some string", f);
fflush(f);
ftruncate(fileno(f), (off_t)0);
fclose(f);
If I run it and then check the file, it will be of zero length and when opened, there will be nothing in it.
If I remove the fflush call, it will NOT be 0 (will be 11) and when I open it there will be "some string" in it.
Is this the normal behavior?
I do not have a problem calling fflush, but I want to do this in a loop and calling fflush may increase the execution time considerably.
You should not really mix file handle and file descriptor calls like that.
What's almost certainly happening without the fflush is that the some string is waiting in file handle buffers for delivery to the file descriptor. You then truncate the file descriptor and fclose the file handle, flushing the string, hence it shows up in the file.
With the fflush, some string is sent to the file descriptor and then you truncate it. With no further flushing, the file stays truncated.
If you want to literally "truncate the file then write", then it's sufficient to:
f=fopen("/home/user1/refresh.txt","w");
fputs("some string",f);
fclose(f);
Opening the file in the mode w will truncate it (as opposed to mode a which is for appending to the end).
Also calling fclose will flush the output buffer so no data gets lost.
POSIX requires you to take specific actions (which ensure that no ugly side effects of buffering make your program go haywire) when switching between using a FILE stream and a file descriptor to access the same open file. This is described in XSH 2.5.1 Interaction of File Descriptors and Standard I/O Streams.
In your case, I believe it should suffice to just call fflush before ftruncate, like you're doing. Omitting this step, per the rules of 2.5.1, results in undefined behavior.

How to open a file for reading in C?

There is this thing that gives me headaches in C programming when I deal with reading from files.
I do not understand the difference between these 2 methods:
FILE *fd;
fd=fopen(name,"r"); // "r" for reading from file, "w" for writing to file
//"a" to edit the file
fd is NULL if the file can't be opened, right?
The second method that i use is:
int fd;
fd=open(name,O_RDONLY);
fd would be -1 if an error occurs at opening the file.
Would anyone be kind enough to explain this to me?
Thanks in advance:)
Using fopen() allows you to use the C stdio library, which can be a lot more convenient than working directly with file descriptors. For example, there's no built-in equivalent to fprintf(...) with file descriptors.
Unless you're in need of doing low level I/O, the stdio functions serve the vast majority of applications very well. It's more convenient, and, in the normal cases, just as fast when used correctly.

C fopen vs open

Is there any reason (other than syntactic ones) that you'd want to use
FILE *fdopen(int fd, const char *mode);
or
FILE *fopen(const char *path, const char *mode);
instead of
int open(const char *pathname, int flags, mode_t mode);
when using C in a Linux environment?
First, there is no particularly good reason to use fdopen if fopen is an option and open is the other possible choice. You shouldn't have used open to open the file in the first place if you want a FILE *. So including fdopen in that list is incorrect and confusing because it isn't very much like the others. I will now proceed to ignore it because the important distinction here is between a C standard FILE * and an OS-specific file descriptor.
There are four main reasons to use fopen instead of open.
fopen provides you with buffering IO that may turn out to be a lot faster than what you're doing with open.
fopen does line ending translation if the file is not opened in binary mode, which can be very helpful if your program is ever ported to a non-Unix environment (though the world appears to be converging on LF-only (except IETF text-based networking protocols like SMTP and HTTP and such)).
A FILE * gives you the ability to use fscanf and other stdio functions.
Your code may someday need to be ported to some other platform that only supports ANSI C and does not support the open function.
In my opinion the line ending translation more often gets in your way than helps you, and the parsing of fscanf is so weak that you inevitably end up tossing it out in favor of something more useful.
And most platforms that support C have an open function.
That leaves the buffering question. In places where you are mainly reading or writing a file sequentially, the buffering support is really helpful and a big speed improvement. But it can lead to some interesting problems in which data does not end up in the file when you expect it to be there. You have to remember to fclose or fflush at the appropriate times.
If you're doing seeks (aka fsetpos or fseek the second of which is slightly trickier to use in a standards compliant way), the usefulness of buffering quickly goes down.
Of course, my bias is that I tend to work with sockets a whole lot, and there the fact that you really want to be doing non-blocking IO (which FILE * totally fails to support in any reasonable way) with no buffering at all and often have complex parsing requirements really color my perceptions.
open() is a low-level os call. fdopen() converts an os-level file descriptor to the higher-level FILE-abstraction of the C language. fopen() calls open() in the background and gives you a FILE-pointer directly.
There are several advantages to using FILE objects rather than raw file descriptors, including greater ease of use and other technical advantages such as built-in buffering. The buffering in particular generally results in a sizeable performance advantage.
fopen vs open in C
1) fopen is a library function while open is a system call.
2) fopen provides buffered IO, which is faster than the unbuffered IO of open.
3) fopen is portable, while open is not (open is environment-specific).
4) fopen returns a pointer to a FILE structure(FILE *); open returns an integer that identifies the file.
5) A FILE * gives you the ability to use fscanf and other stdio functions.
Unless you're part of the 0.1% of applications where using open is an actual performance benefit, there really is no good reason not to use fopen. As far as fdopen is concerned, if you aren't playing with file descriptors, you don't need that call.
Stick with fopen and its family of methods (fwrite, fread, fprintf, et al) and you'll be very satisfied. Just as importantly, other programmers will be satisfied with your code.
If you have a FILE *, you can use functions like fscanf, fprintf and fgets etc. If you have just the file descriptor, you have limited (but likely faster) input and output routines read, write etc.
open() is a system call and specific to Unix-based systems and it returns a file descriptor. You can write to a file descriptor using write() which is another system call.
fopen() is an ANSI C function call which returns a file pointer and it is portable to other OSes. We can write to a file pointer using fprintf.
In Unix:
You can get a file pointer from the file descriptor using:
fP = fdopen(fD, "a");
You can get a file descriptor from the file pointer using:
fD = fileno (fP);
Using open, read, and write means you have to worry about signal interruptions.
If a call is interrupted by a signal handler, these functions return -1
and set errno to EINTR.
So one way to close a file would be
while (retval = close(fd), retval == -1 && errno == EINTR) ;
(Note that POSIX leaves the state of the descriptor unspecified when close fails with EINTR, and on Linux the descriptor is already closed at that point, so blindly retrying close is debatable.)
I changed from fopen() to open() for my application, because fopen was causing double reads every time I ran fopen and then fgetc. The double reads were disruptive to what I was trying to accomplish. open() just seems to do what you ask of it.
open() is ultimately called by each of the fopen() family of functions. open() is a system call, and the fopen() functions are wrappers provided by the library for ease of use.
It also depends on what flags are required when opening. With respect to reading and writing (and portability), the f* functions should be used, as argued above.
But if you want to specify more than the standard flags (like read/write and append), you will have to use a platform-specific API (like POSIX open) or a library that abstracts these details. The C standard does not have any such flags.
For example, you might want to open a file only if it exists: if you don't specify the create flag (O_CREAT), the file must already exist. If you add exclusive (O_EXCL) to create, open will create the file only if it does not already exist, and fail otherwise. There are many more.
For example, on Linux systems there is an LED interface exposed through sysfs. It exposes the brightness of an LED through a file, where you write or read a number as a string ranging from 0-255. Of course you don't want to create that file; you only want to write to it if it exists. The cool thing now: use fdopen to read/write this file using the standard calls.
opening a file using fopen
Before we can read (or write) information from (to) a file on a disk, we must open the file; to open it we call the function fopen.
1. First it searches the disk for the file to be opened.
2. Then it loads the file from the disk into a place in memory called a buffer.
3. It sets up a character pointer that points to the first character of the buffer.
That is how the fopen function behaves.
The buffering involved can introduce latency, so comparing fopen (high-level I/O) with the open (low-level I/O) system call, open can be the faster and more appropriate choice in such cases.
