There is this thing that gives me headaches in C programming when I deal with reading from files.
I do not understand the difference between these two methods:
FILE *fd;
fd = fopen(name, "r"); // "r" for reading from the file, "w" for writing to it
// "a" to append to the file
fd is NULL if the file can't be opened, right?
The second method that I use is:
int fd;
fd = open(name, O_RDONLY);
fd would be -1 if an error occurs when opening the file.
Would anyone be kind enough to explain this to me?
Thanks in advance:)
Using fopen() allows you to use the C stdio library, which can be a lot more convenient than working directly with file descriptors. For example, there's no built-in equivalent to fprintf(...) with file descriptors.
Unless you need to do low-level I/O, the stdio functions serve the vast majority of applications very well. They're more convenient and, in the normal case, just as fast when used correctly.
Related
Is there a way to open a file and write any string to the end of the file without using the O_APPEND (append) option when opening the file?
I'm coding in C in a Unix environment for the first time for class.
I know I could use lseek(fd, 0, SEEK_END) to seek the end of the file and fstat() to get the file size, but overall, I'm not sure what my code should be like.
What I have is
int fwrite = open("abc.txt", O_RDWR);
int fread = open
Also, this is my first time on Stack Overflow, so please guide me.
Yes, but no. O_APPEND has a special property that permits multiple processes to write to the file without interfering with each other. So, if N independent processes open a file with O_APPEND, the writes may be interleaved, but each write will be coherent. Log files exploit this, for example.
If you open the file and lseek to the end, the end point might have changed by the time you write(), and you would be overwriting valid data.
In short, if you need to append, use O_APPEND; if you want random access, don't. The same program can open the same file in different modes.
I want to add a structure to a binary file, but first I need to check whether the file has previous data stored in it. If not, I can add the structure; otherwise I'll have to read all the stored data and stick the structure in its correct place. But I got confused about how to check if the file is empty. I thought about trying something like this:
long size = 0;
if (fp != NULL)
{
    fseek(fp, 0, SEEK_END);
    size = ftell(fp);
    rewind(fp);
}
if (size == 0)
{
    // print your error message here
}
But if the file is empty or not yet created, how can the file pointer not be NULL? What's the point of using ftell() if I can simply do something like this:
if (fp == NULL)
{
    fp = fopen("data.bin", "wb");
    fwrite(&my_struct, sizeof(my_struct), 1, fp);
    fclose(fp);
}
I know that NULL can be returned in other cases, such as protected files, but I still can't understand how using ftell() is effective when file pointers will always be NULL if the file is empty. Any help will be appreciated :)
i need to check whether the file has previous data stored in it
There might be no portable and robust way to do that (that file might change during the check, because other processes are using it). For example, on Unix or Linux, that file might be opened by another process writing into it while your own program is running (and that might even happen between your ftell and your rewind). And your program might be running in several processes.
You could use operating system specific functions. For POSIX (including Linux and many Unixes like MacOSX or Android), you might use stat(2) to query the file status (including its size with st_size). But after that, some other process might still write data into that file.
You might consider advisory locking, e.g. with flock(2), but then you adopt the system-wide convention that every program using that file would lock it.
You could use some database with ACID properties. Look into sqlite or into RDBMS systems like PostgreSQL or MariaDB. Or an indexed file library like gdbm.
You can continue coding with the implicit assumption (but be aware of it) that only your program is using that file, and that your program has at most one process running it.
if the file is empty [...] how can the file pointer not be NULL ?
As Increasingly Idiotic answered, fopen can fail, but it usually doesn't fail on empty files. Of course, you need to handle fopen failure (see also this). So most of the time your fp would be valid, and your code chunk (assuming no other process is changing that file simultaneously) using ftell and rewind is an approximate way to check that the file is empty. BTW, if you read (e.g. with fread or fgetc) something from that file, that read would fail if your file was empty, so you probably don't need to check its emptiness beforehand.
A POSIX specific way to query the status (including size) of some fopen-ed file is to use fileno(3) and fstat(2) together like fstat(fileno(fp), &mystat) after having declared struct stat mystat;
fopen() does not return NULL for empty files.
From the documentation:
If successful, returns a pointer to the object that controls the opened file stream ... On error, returns a null pointer.
NULL is returned only when the file could not be opened. The file could fail to open due to any number of reasons such as:
The file doesn't exist
You don't have permissions to read the file
The file cannot be opened multiple times simultaneously.
More possible reasons in this SO answer
In your case, if fp == NULL you'll need to figure out why fopen failed and handle each case accordingly. In most cases, an empty file will open just fine and return a non-NULL file pointer.
I am developing C code in a Linux environment. I use fwrite to write some data to some files. The program will run in an environment where power cuts occur often (at least once a day). Therefore, I want to ensure that the file is not updated if a power cut occurs while fwrite is writing data. The file should only be updated once fwrite finishes its job. How can I use fwrite so that it affects the file only after it finishes the writing process?
EDIT: I use fopen with "wb" to discard the previous contents of the file and write a new file, e.g.
FILE *rtng_p;
rtng_p = fopen("/etc/routing_table", "wb");
fwrite(&user_list, sizeof(struct routing), 40, rtng_p);
and it is a very small amount of data, just some bytes long.
First write the file to a temporary path on the same filesystem, like /etc/routing_table.tmp. Then just rename the copy on top of the original file. rename() is guaranteed to be atomic on POSIX filesystems.
So, the sequence of calls would be, fopen, fwrite, fclose, rename.
In addition to the sequence given in David Schwartz's answer, you could perhaps use advisory locks with e.g. the flock(2) syscall (or maybe lockf(3), i.e. fcntl(2) with F_SETLK...).
That would mean to add, just after
FILE * fil = fopen("/etc/routing_table.tmp", "wb");
the lines
if (!fil)
{ perror("/etc/routing_table.tmp"); exit(EXIT_FAILURE); };
if (flock(fileno(fil), LOCK_EX))
{ perror("flock LOCK_EX"); exit(EXIT_FAILURE); };
and at the end, you would
if (fflush(fil)) /* flush the file before unlocking it!!*/
{ perror("fflush"); exit(EXIT_FAILURE); };
if (flock(fileno(fil), LOCK_UN))
{ perror("flock LOCK_UN"); exit(EXIT_FAILURE); };
if (fclose (fil))
{ perror("fclose"); exit(EXIT_FAILURE); };
if (rename("/etc/routing_table.tmp", "/etc/routing_table"))
{ perror("rename"); exit(EXIT_FAILURE); };
Using such advisory locking would ensure that even if two processes of your program are running, only one would write the file.
But it is overkill probably.
BTW, you seem to be writing binary data into /etc/. I believe that is against the habits or conventions (see Linux Filesystem Hierarchy, or Linux Standard Base). I expect files under /etc to be textual. Perhaps you want your file under /var/lib?
See also Advanced Linux Programming book online.
There has been a large argument going on in the UNIX/Linux community about whether the open/write/close/rename pattern (as described in David Schwartz's answer) is actually guaranteed to be atomic. Note this conversation is about write and not fwrite!
The primary author of the EXT4 filesystem did not believe that it should be guaranteed according to POSIX and early versions of the filesystem did not treat it as atomic. Eventually he capitulated and made that set of operations atomic as the default behavior for EXT4. The claim was made, however, that user programs should actually be doing open/write/fsync/close/rename.
Other filesystems may not guarantee atomicity without the fsync, and if EXT4 is mounted with noauto_da_alloc then that guarantee is lost there as well. So if you want to be really safe you should add an fsync before the close and the rename. I haven't tried this with fwrite; it might work if you use fflush first.
See the auto_da_alloc section at https://www.kernel.org/doc/Documentation/filesystems/ext4.txt for more information. Also see an article written by the primary author of EXT4 here: http://thunk.org/tytso/blog/2009/03/12/delayed-allocation-and-the-zero-length-file-problem/
We call a library to read text, and this library's API only accepts a FILE* pointer. Internally it reads the file text with fread().
But we also need to use this library to read text from a char* string rather than a FILE*.
Of course we can write the char* string into a temp file but we're not allowed to do this for some reasons...
How can this be done? Thanks!
Check out fmemopen
The fmemopen() function shall associate the buffer given by the buf argument with a stream.
#include <stdio.h>
#include <string.h>

static char buffer[] = "foobar";

int main (void)
{
  FILE *stream;
  stream = fmemopen (buffer, strlen (buffer), "r");
  /* You got a FILE* pointer, you can call your function here :-) */
  fclose (stream);
  return 0;
}
It can be done, but it's quite complicated.
You can create a shared-memory file handle with shm_open; this file handle can then be used by mmap to map the memory area of the string, and then fdopen creates a FILE pointer from the file descriptor.
Note: This will only work on POSIX (e.g. Linux or Mac OSX) systems. Windows systems should have similar functionality, but it still won't be easy.
Edit It's probably something similar to this that happens behind the scenes in the fmemopen call referenced in the answer by Massimo Fazzolari.
On various unix systems, you can create a pipe/socket or similar file descriptors, and use fdopen() to open the file descriptor and get a FILE* pointer. Then feed the string into the pipe/socket.
I suggest you check your program/library design when you run into strange problems like this. Strange problems/requirements are strong indications of bad designs.
Hmm, short of writing your own device driver to communicate back with the process somehow, this is a tricky one. By that, I mean you could create a character device which would, when read from, communicate via some sort of IPC (shared memory, named pipes, or other things) back to the process.
But this is (1) nasty, (2) UNIX-specific (non-portable) and (3) a very bad idea :-)
Without such low-level tricks (or with non-portable extensions that can treat memory like file handles), this cannot be done - fread expects a FILE* and will read from a file handle, that's it, really.
I don't think this can be done. fread can read only from a file stream, i.e. a FILE * such as stdin.
Is there any reason (other than syntactic ones) that you'd want to use
FILE *fdopen(int fd, const char *mode);
or
FILE *fopen(const char *path, const char *mode);
instead of
int open(const char *pathname, int flags, mode_t mode);
when using C in a Linux environment?
First, there is no particularly good reason to use fdopen if fopen is an option and open is the other possible choice. You shouldn't have used open to open the file in the first place if you want a FILE *. So including fdopen in that list is incorrect and confusing because it isn't very much like the others. I will now proceed to ignore it because the important distinction here is between a C standard FILE * and an OS-specific file descriptor.
There are four main reasons to use fopen instead of open.
fopen provides you with buffering IO that may turn out to be a lot faster than what you're doing with open.
fopen does line ending translation if the file is not opened in binary mode, which can be very helpful if your program is ever ported to a non-Unix environment (though the world appears to be converging on LF-only (except IETF text-based networking protocols like SMTP and HTTP and such)).
A FILE * gives you the ability to use fscanf and other stdio functions.
Your code may someday need to be ported to some other platform that only supports ANSI C and does not support the open function.
In my opinion the line ending translation more often gets in your way than helps you, and the parsing of fscanf is so weak that you inevitably end up tossing it out in favor of something more useful.
And most platforms that support C have an open function.
That leaves the buffering question. In places where you are mainly reading or writing a file sequentially, the buffering support is really helpful and a big speed improvement. But it can lead to some interesting problems in which data does not end up in the file when you expect it to be there. You have to remember to fclose or fflush at the appropriate times.
If you're doing seeks (i.e. fsetpos or fseek, the second of which is slightly trickier to use in a standards-compliant way), the usefulness of buffering quickly goes down.
Of course, my bias is that I tend to work with sockets a whole lot, and there the fact that you really want to be doing non-blocking IO (which FILE * totally fails to support in any reasonable way) with no buffering at all, plus often having complex parsing requirements, really colors my perceptions.
open() is a low-level OS call. fdopen() converts an OS-level file descriptor to the higher-level FILE abstraction of the C language. fopen() calls open() in the background and gives you a FILE pointer directly.
There are several advantages to using FILE objects rather than raw file descriptors, including greater ease of use and other technical advantages such as built-in buffering. The buffering in particular generally results in a sizeable performance advantage.
fopen vs open in C
1) fopen is a library function while open is a system call.
2) fopen provides buffered IO, which is faster compared to open, which is unbuffered.
3) fopen is portable while open is not (open is environment-specific).
4) fopen returns a pointer to a FILE structure (FILE *); open returns an integer that identifies the file.
5) A FILE * gives you the ability to use fscanf and other stdio functions.
Unless you're part of the 0.1% of applications where using open is an actual performance benefit, there really is no good reason not to use fopen. As far as fdopen is concerned, if you aren't playing with file descriptors, you don't need that call.
Stick with fopen and its family of methods (fwrite, fread, fprintf, et al) and you'll be very satisfied. Just as importantly, other programmers will be satisfied with your code.
If you have a FILE *, you can use functions like fscanf, fprintf and fgets etc. If you have just the file descriptor, you have limited (but likely faster) input and output routines read, write etc.
open() is a system call and specific to Unix-based systems and it returns a file descriptor. You can write to a file descriptor using write() which is another system call.
fopen() is an ANSI C function call which returns a file pointer and it is portable to other OSes. We can write to a file pointer using fprintf.
In Unix:
You can get a file pointer from the file descriptor using:
fP = fdopen(fD, "a");
You can get a file descriptor from the file pointer using:
fD = fileno (fP);
Using open, read, and write means you have to worry about signal interruptions.
If the call was interrupted by a signal handler, the functions will return -1
and set errno to EINTR.
So the proper way to close a file would be
while (retval = close(fd), retval == -1 && errno == EINTR) ;
I changed to open() from fopen() for my application, because fopen was causing double reads every time I ran fopen and fgetc. Double reads were disruptive of what I was trying to accomplish. open() just seems to do what you ask of it.
open() is called under the hood by each of the fopen() family of functions. open() is a system call, while the fopen() functions are provided by the library as wrappers for ease of use.
It also depends on what flags are required when opening. For writing and reading (and portability), the f* functions should be used, as argued above.
But if you basically want to specify more than the standard flags (like read/write and append), you will have to use a platform-specific API (like POSIX open) or a library that abstracts these details. The C standard does not have any such flags.
For example, you might want to open a file only if it exists. If you don't specify the create flag, the file must already exist. If you add exclusive to create, it will create the file only if it does not already exist. There are many more.
For example, on Linux systems there is an LED interface exposed through sysfs. It exposes the brightness of the LED through a file: you write or read a number as a string, ranging from 0 to 255. Of course you don't want to create that file, and you only want to write to it if it exists. The cool thing now: use fdopen to read/write this file using the standard calls.
Opening a file using fopen
Before we can read (or write) information from (to) a file on disk, we must open the file. To open the file we call the function fopen.
1. First it searches the disk for the file to be opened.
2. Then it loads the file from the disk into a place in memory called a buffer.
3. It sets up a character pointer that points to the first character of the buffer.
This is how the fopen function behaves.
There are some costs to this buffering process, so comparing fopen (high-level I/O) to the open (low-level I/O) system call, open is faster and more appropriate in some situations.