I am reading from a file like this:
#include <stdio.h>
#include <stdlib.h>

int main() {
    FILE *fp = fopen("sorted_hits", "r+");
    while (!feof(fp)) {
        int item_read;
        int *buffer = (int *)malloc(sizeof(int));
        item_read = fread(buffer, sizeof(int), 1, fp);
        if (item_read == 0) {
            printf("at file %ld\n", ftell(fp));
            perror("read error:");
        }
    }
}
This file is big, and I sometimes get a "Bad file descriptor" error. ftell shows the file position where the error occurred.
I don't understand why it happens only sometimes; is that normal? Does the problem lie in my code or in my hard disk? How should I handle this?
perror prints whatever is in errno as a descriptive string. errno gets set to an error code whenever a system call has an error return. But, if a system call DOESN'T fail, errno doesn't get modified and will continue to contain whatever it contained before. Now if fread returns 0, that means that either there was an error OR you reached the end of the file. In the latter case, errno is not set and might contain any random garbage from before.
So in this case, the "Bad file descriptor" message you're getting probably just means there hasn't been an error at all. You should be checking ferror(fp) to see if an error has occurred.
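For illustration, here is a minimal sketch of that check, reusing the question's "sorted_hits" file name (opened in binary mode, as another answer below suggests):

#include <stdio.h>

int main(void) {
    FILE *fp = fopen("sorted_hits", "rb");
    if (fp == NULL) {
        perror("sorted_hits");
        return 1;
    }

    int buffer;
    while (fread(&buffer, sizeof(int), 1, fp) == 1) {
        /* process buffer */
    }

    /* fread returned 0: decide whether that was EOF or a real error */
    if (ferror(fp)) {
        printf("error at file offset %ld\n", ftell(fp));
        perror("read error");
    } else if (feof(fp)) {
        printf("clean end of file\n");
    }

    fclose(fp);
    return 0;
}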
You seem to be mixing text and binary modes when reading the file.
Normally when you use fread you read from a binary file, i.e. fread reads a number of bytes matching the buffer size, but you seem to be opening the file in text mode ("r+"). ftell doesn't work reliably on files opened in text mode because newlines are treated differently from other characters.
Open the file in binary mode (untranslated) instead:
FILE *fp = fopen("sorted_hits", "rb+");
If that's really what your loop looks like, my guess would be that you're probably getting a more or less spurious error because your process is just running out of memory because your loop is leaking it so badly (calling malloc every iteration of your loop, but no matching call to free anywhere).
It's also possible (but a lot less likely) that you're running into a little problem from your (common but nearly always incorrect) use of while (!feof(fp)).
Your call to printf can also give undefined behavior if the conversion specifier doesn't match the type: ftell returns long, so it needs %ld (though on many current systems a mismatch with %d is harmless because long and int are the same size).
Fixing those may or may not remove the problem you've observed, but at least if you still see it, you'll have narrowed down the possibilities of what may be causing the problem.
#include <stdio.h>

int main() {
    FILE *fp = fopen("sorted_hits", "r+");
    int buffer;

    while (0 != fread(&buffer, sizeof(int), 1, fp))
        ;   /* read file but ignore contents */

    if (ferror(fp)) {
        printf("At file: %ld\n", ftell(fp));
        perror("read error: ");
    }
}
I'm learning how to read content from a file in C, and I managed to scrape together the following code.
#include <stdio.h>
#include <stdlib.h>

void read_content(FILE *file) {
    char *x = malloc(20);
    // read first 20 chars
    int read = fread(x, sizeof(char), 20, file);
    if (read != 20) {
        printf("Read could not happen\n");
    }
    else {
        printf("the content read is %s", x);
    }
    free(x);
    return;
}

int main(int argc, char *argv[]) {
    FILE *fp;
    fp = fopen("test.txt", "w+");
    read_content(fp);
    fclose(fp);
    return 0;
}
But for some reason (which I'm not able to understand) I see the read bytes count as 0.
The problem is that you open the file with the w+ mode. There are two possibilities:
if the file doesn't exist, it will be created empty. Reading from it immediately gives end of file resulting in fread() returning 0.
if the file does exist, it will be truncated i.e. changed into an empty file. Reading from it immediately gives end of file resulting in fread() returning 0.
If you just want to read from the file (as per your example), open it in mode r. If you want to read and write without destroying its existing content, use mode r+.
Whatever mode you choose, always check that fopen() returns non null and print the error if it returns null (this is not the cause of your problem but is best practice).
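As a rough sketch of that advice, reusing the question's test.txt (the null terminator is added explicitly, since fread does not add one):

#include <stdio.h>

int main(void) {
    /* "r" opens for reading only and fails if the file doesn't exist */
    FILE *fp = fopen("test.txt", "r");
    if (fp == NULL) {
        perror("test.txt");   /* report why fopen failed */
        return 1;
    }

    char x[21];
    size_t nread = fread(x, sizeof(char), 20, fp);
    x[nread] = '\0';          /* fread does not null-terminate */
    printf("the content read is %s\n", x);

    fclose(fp);
    return 0;
}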
From the man page, for the w+ flag:
Open for reading and writing. The file is created if it does
not exist, otherwise it is truncated.
You are probably trying to open a file which doesn't exist at the path you provided, or one that is read-only, as #WhozCraig suggested in a comment. In the first case a new, empty file is being created, hence you are seeing 0 bytes read.
If fopen itself fails, it returns NULL (not -1), so check its return value against NULL. To find out what the error was, you can check errno, which is set to indicate the error.
If you only intend to read, open the file with the r flag instead of w+.
The problem lies within this line of code:
fp = fopen("test.txt","w+")
the "w+" mode, clear the previous content of the file and the file will be empty when you just going to read the file without writing anything to it. Hence, it is printing "Read could not happen" because you are trying to read an empty file.
I would suggest you to use "r+" mode, if you are willing to read and then write into the file. Otherwise, r mode is good enough for simple reading of a file.
Is it required to check the return value of fclose? If we have successfully opened a file, what are the chances that it may fail to close?
When you fwrite to a file, it may not actually write anything yet; the data may stay in a buffer (inside the FILE object). Calling fflush would actually write it to disk. That operation may fail, for example if you just ran out of disk space or there is some other I/O error.
fclose flushes the buffers implicitly too, so it may fail for the same reasons.
From comp.lang.c:
The fclose() call can fail, and should be error-checked just as assiduously as all the other file operations. Sounds pedantic, right? Wrong. In a former life, my company's product managed to destroy a customer's data by omitting a check for failure when closing a file. The sequence went something like (paraphrased):

stream = fopen(tempfile, "w");
if (stream == NULL) ...
while (more_to_write)
    if (fwrite(buffer, 1, buflen, stream) != buflen) ...
fclose(stream);

/* The new version has been written successfully. Delete
 * the old one and rename.
 */
remove(realfile);
rename(tempfile, realfile);

Of course, what happened was that fclose() ran out of disk space trying to write the last couple blocks of data, so the `tempfile' was truncated and unusable. And since the fclose() failure wasn't detected, the program went right ahead and destroyed the best extant version of the data in favor of the damaged version. And, as Murphy would have it, the victim in this particular incident was the person in charge of the customer's department, the person with authority to buy more of our product or replace it with a competitor's product -- and, natch, a person who was already unhappy with us for other reasons.

It'd be a stretch to ascribe all of the ensuing misery to this single omission, but it may be worth pointing out that both the customer and my former company have since vanished from the corporate ecology.

CHECK THOSE FAILURE CODES!
You could (and should) report the error, but in a sense, the stream is still closed:
After the call to fclose(), any use of stream results in undefined behavior.
fclose() will flush any unwritten output (via fflush()) before returning, so an error from the underlying write() won't be reported at fwrite() or fprintf() time, but when you call fclose(). As a result, any error that write() or fflush() can generate can be generated by fclose().
fclose() will also call close() which can generate errors on NFS clients, where the changed file isn't actually uploaded to the remote server until close() time. If the NFS server crashed, then the close() will fail, and thus fclose() will fail as well. This might be true of other networked filesystems.
You must ALWAYS check the result of fclose().
Let's say you're generating data. You have old data that you fread() from a file, then do some processing on it, generate more data, and write it to a new file. You are careful not to overwrite the old file because you know that creating the new file might fail, and you would like to keep your old data in that case (some data is better than no data). After finishing all the fwrite()s, which all succeed (because you meticulously checked the return value from fwrite()), you fclose() the file. Then, you rename() your just-written file over the old file.
If fclose() failed because of write error (disk full?), you just overwrote your last good file with something that might be junk. Oops.
So, if it is critical, you should check the return value of fclose().
In terms of code:
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
FILE *ifp = fopen("in.dat", "rb");
FILE *ofp = fopen("out.dat", "wb");
char buf[BUFSIZ];
size_t n;
int success = 1;
if (ifp == NULL) {
fprintf(stderr, "error opening in.dat\n");
perror("in.dat");
return EXIT_FAILURE;
}
if (ofp == NULL) {
fclose(ifp);
fprintf(stderr, "error opening out.dat\n");
perror("out.dat");
return EXIT_FAILURE;
}
while ((n = fread(buf, 1, sizeof buf, ifp)) > 0) {
size_t nw;
if ((nw = fwrite(buf, 1, n, ofp)) != n) {
fprintf(stderr, "error writing, wrote %lu bytes instead of %lu\n",
(unsigned long)n,
(unsigned long)nw);
fclose(ifp);
fclose(ofp);
return EXIT_FAILURE;
}
}
if (ferror(ifp)) {
fprintf(stderr, "ferror on ifp\n");
fclose(ofp);
fclose(ifp);
return EXIT_FAILURE;
}
#ifdef MAYLOSE_DATA
fclose(ofp);
fclose(ifp);
rename("out.dat", "in.dat"); /* Oops, may lose data */
#else
if (fclose(ofp) == EOF) {
perror("out.dat");
success = 0;
}
if (fclose(ifp) == EOF) {
perror("in.dat");
success = 0;
}
if (success) {
rename("out.dat", "in.dat"); /* Good */
}
#endif
return success ? EXIT_SUCCESS : EXIT_FAILURE;
}
In the above code, we have been careful about fopen(), fwrite(), and fread(), but even then, not checking fclose() may result in data loss (when compiled with MAYLOSE_DATA defined).
One reason fclose can fail is if there is any data still buffered and the implicit fflush fails. What I recommend is to always call fflush and do any error handling there.
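As a sketch of that approach (save_and_close is a made-up helper name, not a standard function):

#include <stdio.h>

/* Flush explicitly so write errors surface before the close itself */
int save_and_close(FILE *fp) {
    int status = 0;
    if (fflush(fp) == EOF) {   /* buffered data could not be written */
        perror("fflush");
        status = -1;
    }
    if (fclose(fp) == EOF) {   /* the close can still fail, e.g. on NFS */
        perror("fclose");
        status = -1;
    }
    return status;
}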
I've seen fclose() return non-zero many times, and on careful examination found that the actual problem was with the write, not with fclose itself.
Since data being written is buffered before the actual write happens, and calling fclose() flushes the whole buffer, any problem writing the buffered stuff, say a full disk, shows up during fclose(). As David Yell says, to write a bulletproof application you need to consider the return value of fclose().
The fclose man page cites that it may fail for any of the reasons that close or fflush may fail.
Quoting:
The close() system call will fail if:
[EBADF] fildes is not a valid, active file descriptor.
[EINTR] Its execution was interrupted by a signal.
[EIO] A previously-uncommitted write(2) encountered an
input/output error.
fflush can fail for the same reasons that write() would fail, basically in the event that you can't actually write to or save the file.
In a sense, closing a file never fails: errors are returned if a pending write operation failed, but the stream will be closed.
To avoid problems and to ensure (as far as you can from a C program) that your data is safely written, I suggest you:
Properly handle errors returned by fwrite().
Call fflush() before closing the stream. Do remember to check for errors returned by fflush().
If, in the context of your application, you can think of something useful to do if fclose() fails, then test the return value. If you can't, don't.
I have a random access file opened in "r+b" mode with records of equal length. Can I change the contents of a record after reading it and overwrite in place?
I tried the following code, but on running it I get: Segmentation fault (core dumped)
#include <stdio.h>

int main()
{
    struct tala {
        int rec_no;
        long file_no;
    };
    FILE *file_locking;
    struct tala t, f;

    file_locking = fopen("/path/to/my/file.bin", "rb+");
    t.rec_no = 1;
    t.file_no = 3;
    if (fwrite(&t, sizeof(struct tala), 1, file_locking) == 0)
        printf("Error opening file");
    t.rec_no = 0;
    rewind(file_locking);
    if (fwrite(&t, sizeof(struct tala), 1, file_locking) == 0)
        printf("Error opening file");
    rewind(file_locking);
    if (fread(&f, sizeof(struct tala), 1, file_locking) == 0)
        printf("Error opening file");
    printf("\n %d", f.rec_no);
    printf("\n %ld", f.file_no);
    fclose(file_locking);
}
Yes you can. Just remember to always fseek between reads and writes.
Quote the fopen man page:
Reads and writes may be intermixed on read/write streams in any order. Note that ANSI C requires that a file positioning function intervene between output and input, unless an input operation encounters end-of-file.
Extra tip: always check the return value of fopen and related functions, and handle errors (use perror or strerror to print out what failed).
Yes. The only thing to pay attention to is that you have to call fflush or a file positioning function before switching from output to input, and call a file positioning function or be at end-of-file before switching from input to output.
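To illustrate, here is a sketch using the question's struct tala; update_record is a hypothetical helper and assumes the stream was opened in "rb+" mode on a file of fixed-length records:

#include <stdio.h>

struct tala {
    int rec_no;
    long file_no;
};

/* Read record n, modify it, and overwrite it in place. */
int update_record(FILE *fp, long n) {
    struct tala rec;

    if (fseek(fp, n * (long)sizeof rec, SEEK_SET) != 0)
        return -1;
    if (fread(&rec, sizeof rec, 1, fp) != 1)
        return -1;

    rec.rec_no += 1;   /* some change to the record */

    /* A positioning call is required between the read and the write */
    if (fseek(fp, n * (long)sizeof rec, SEEK_SET) != 0)
        return -1;
    if (fwrite(&rec, sizeof rec, 1, fp) != 1)
        return -1;

    return 0;
}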
I am trying to read one byte at a time from a file:
size_t result_new = 1;
char buf6[1];
if ((result_new = fread(buf6, 1, 1, pFile)) != 1)
{
    printf("result_new = %zu\n", result_new);
    printf("Error reading file\n");
    exit(1);
}
result_new becomes 0 and it prints the error. Any idea what could be wrong? I'm sure pFile is fine.
Thanks
According to the documentation:
fread() and fwrite() return the number of items successfully read or written (i.e., not the number of characters). If an error occurs, or the end-of-file is reached, the return value is a short item count (or zero).
So why don't you check the error code? That will answer your question. You can use perror, for example.
If you only need one byte, getc would be a much better choice than fread. The interface is simpler and it's likely to be a lot faster.
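For example (a sketch; "input.dat" is a placeholder filename):

#include <stdio.h>

int main(void) {
    FILE *pFile = fopen("input.dat", "rb");
    if (pFile == NULL) {
        perror("input.dat");
        return 1;
    }

    int c;   /* int, not char, so EOF stays distinguishable from data */
    while ((c = getc(pFile)) != EOF) {
        /* process one byte in c */
    }
    if (ferror(pFile))   /* EOF alone doesn't separate error from end of file */
        perror("read error");

    fclose(pFile);
    return 0;
}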
http://www.cplusplus.com/reference/clibrary/cstdio/fread/ has an example of reading from a file. It is a C++ page, but the example should work for C too.
Keep in mind when using fread and fwrite that strange errors can occur when the file is opened in normal text mode. Opening the file in binary mode eliminates this potential problem. It's mainly due to newline translation: in text mode (on some platforms, notably Windows) newlines are translated between '\n' and '\r\n', which corrupts binary data and makes file offsets unreliable.