error while using fopen frequently - c

I have this function
char *Readfiletobuffer(char *file, FILE *fp) {
    char *buffer;
    int file_size;

    fp = fopen(file, "r");
    if (fp != NULL) {
        fseek(fp, 0, SEEK_END);
        file_size = ftell(fp);
        buffer = (char *) malloc((file_size + 1) * sizeof(char));
        fseek(fp, 0, SEEK_SET);
        fread(buffer, file_size, 1, fp);
        buffer[file_size] = '\0';
        return buffer;
    } else {
        printf("error loading file");
    }
    fclose(fp);
}
which I call 1050 times in my program, and on the 1019th call fopen() returns a NULL pointer.
It doesn't depend on the file; it's always the 1019th time, so I think it has something to do with freeing memory, but why isn't the fclose() call enough?
Does someone have an idea?

Your program can tell you what went wrong with errno, the global variable where many functions deposit their error code when they fail. Combined with strerror to turn that code into a human-readable message, you'd change your error handling to something like this.
#include <errno.h>
#include <string.h>
...
fp = fopen(file, "r");
if (fp == NULL) {
    fprintf(stderr, "Could not open '%s': %s", file, strerror(errno));
    exit(1);
}
fseek(fp, 0, SEEK_END);
file_size = ftell(fp);
...
Note the use of an early exit to eliminate having to nest the whole function in an if/else block.
Also note that you're failing to check the rest of your file operations. fseek, ftell, and fread can all fail. You need similar checks for all of them. Rather than littering your code with error handling, and probably forgetting to do it in a few places, I recommend writing little wrappers.
FILE *open_file(const char *filename, const char *mode) {
    FILE *fp = fopen(filename, mode);
    if( fp == NULL ) {
        fprintf(
            stderr, "Could not open '%s' for '%s': %s\n",
            filename, mode, strerror(errno)
        );
        exit(1);
    }
    return fp;
}
Note that this isn't the best error handling; it simply exits on error. At this stage in learning C, it's probably best to just bail out on an error. If you did something like return NULL, odds are you won't have the error handling in place to deal with a null pointer, and it will just bounce around causing mysterious problems and crashes later in the code. For now it's best to halt and catch fire as close to the error as possible.
Spoiler alert: your process ran out of file handles because you're not closing your files. As @BLUEPIXY correctly points out in the comments, your fclose comes after the return in the success path, so it will only happen if the file fails to open.
Since you're passing in the file pointer, maybe you intend to use it later? In that case you can't hold onto that many open files and you'll have to redesign your code. If not, there's no reason to pass it in, since the function opens it itself.
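Putting that together, here's a sketch of the function with the fclose moved so it runs on the success path, assuming the caller doesn't need the FILE pointer (checks on fseek, ftell, and fread are still omitted for brevity):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *Readfiletobuffer(const char *file) {
    FILE *fp = fopen(file, "r");
    if (fp == NULL) {
        fprintf(stderr, "Could not open '%s': %s\n", file, strerror(errno));
        exit(1);
    }

    fseek(fp, 0, SEEK_END);
    long file_size = ftell(fp);
    char *buffer = malloc(file_size + 1);
    fseek(fp, 0, SEEK_SET);
    fread(buffer, file_size, 1, fp);
    buffer[file_size] = '\0';

    fclose(fp);   // close on the success path, before returning
    return buffer;
}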
You should have gotten a warning like this, if you had warnings turned on with -Wall.
test.c:23:1: warning: control may reach end of non-void function [-Wreturn-type]
}
If the file fails to open, nothing gets returned, and that's not ok.
Don't ignore your warnings, fix all of them. Investigating this warning would have pointed you at the problem.
Check all your file operations to make sure they succeeded.
Include strerror(errno) in your error messages so you know why it failed.
Investigate and fix all your warnings.

Related

Passing a file descriptor from `open_memstream` to `dup2`

I am trying to redirect output from an exec()ed function into a buffer, so I thought I would try to use open_memstream to handle the dynamic buffering.
I put together this to test it out:
#include <stdio.h>
#include <unistd.h>

int main() {
    char *buffer;
    size_t buffer_len;
    FILE *stream = open_memstream(&buffer, &buffer_len);
    if (!stream) perror("Something went wrong with `open_memstream`!");
    fflush(stream);
    puts("Start");
    if (dup2(fileno(stream), STDOUT_FILENO) == -1) perror("Something went wrong!");
    puts("Internal");
    fclose(stream);
    FILE *f = fopen("out.txt", "w+");
    fputs(buffer, f);
    fclose(f);
}
But running it gives me the error bad file descriptor on dup2, which shouldn't be the case, since open_memstream isn't returning NULL, which is what it's supposed to do on error.
Is there something about the implementation of open_memstream that makes it nonviable to manipulate its underlying descriptor? Or am I just being dumb and using a function wrong?
Cheers in advance for any help given, and if this is impossible to do with open_memstream, is there a way to handle it with FILE* instead of using fds directly?
You should check the return value (and subsequently errno) after every operation that can go wrong. Here, you are missing a check on the return value of fileno(stream).
FILE *stream = open_memstream(&buffer, &buffer_len);
if (!stream) perror("Failed to open_memstream");

int fd = fileno(stream);
if (fd == -1) {
    perror("Failed to get memstream fileno");
    exit(1);
}
When you add the above, your program will fail with message
Failed to get memstream fileno: Bad file descriptor
The reason for this failure is already explained in comments on the question: a memory stream isn't backed by a file descriptor at all, so fileno() has nothing to return and fails with EBADF.
Have a look at open with the O_TMPFILE flag, or at memfd_create, which is similar to open_memstream but returns a file descriptor.
These approaches force you to forgo the convenience of having &buffer, &buffer_len. But nothing is actually lost. One can use lseek to learn the tmp file size and then mmap to access it as a memory buffer, getting all the conveniences back.
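For illustration, a minimal sketch of the memfd_create route (Linux-specific, glibc 2.27+), loosely mirroring the question's program; the name "capture" is an arbitrary label:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    int fd = memfd_create("capture", 0);   // anonymous in-memory file
    if (fd == -1) { perror("memfd_create"); exit(1); }

    puts("Start");
    fflush(stdout);                        // flush before redirecting stdout
    if (dup2(fd, STDOUT_FILENO) == -1) { perror("dup2"); exit(1); }

    puts("Internal");
    fflush(stdout);                        // make sure the bytes reach fd

    off_t len = lseek(fd, 0, SEEK_END);    // learn the captured size
    char *buf = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); exit(1); }

    fwrite(buf, 1, len, stderr);           // stdout is redirected, so use stderr
    munmap(buf, len);
    close(fd);
}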

Reading parameter from file and creating filenames

I want to read a name from a file (for example config_file.txt with only one entry like run)
and then create filenames with that, like run0.txt, run1.txt and so on.
But I get something like run..0.txt with two black dots.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAXCHAR 1000

void generate(char const *fileName);

int main(int argc, char **argv) {
    generate("config_file.txt");
}

void generate(char const *fileName) {
    char id[MAXCHAR];
    FILE *fp;
    char str[MAXCHAR];

    fp = fopen(fileName, "r");
    if (fp == NULL) {
        printf("Could not open file %s", fileName);
        return 1;
    }
    while (fgets(str, MAXCHAR, fp) != NULL) {
        strcpy(id, str);
    }
    fclose(fp);

    FILE *filePtr;
    char filename[100];
    for (int i = 0; i < 8; i++) {
        sprintf(filename, "%s%d.txt", id, i);
        filePtr = fopen(filename, "w");
    }
    fclose(filePtr);
}
As I noted in the comments, your code does not zap the newline that fgets() normally preserves as it reads lines, before trying to append the number and extension to the name.
The simple and reliable method for zapping the newline is:
str[strcspn(str, "\n")] = '\0';
There are alternatives that might be more efficient (though efficiency is probably a red herring here — creating files takes a lot longer than reading through a short line of characters), but you have to get a variety of conditions right (empty buffer, buffer with no newline, etc).
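For example, a sketch of the common strlen-based alternative, which has to handle the empty-buffer and no-newline cases explicitly:

size_t len = strlen(str);
if (len > 0 && str[len - 1] == '\n')
    str[len - 1] = '\0';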
You also have:
if (fp == NULL) {
    printf("Could not open file %s", fileName);
    return 1;
}
You should report errors on stderr instead of stdout.
You might consider including the error number and/or error message.
You should finish the message with a newline.
You can't write return 1; in a function returning void — the compiler must complain about that.
C11 §6.8.6.4 The return statement:
¶1 A return statement with an expression shall not appear in a function whose return type is void. A return statement without an expression shall only appear in a function whose return type is void.
Hence, you should consider writing:
if (fp == NULL)
{
    fprintf(stderr, "Could not open file %s for reading: %s\n", fileName, strerror(errno));
    return;
}
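Putting the newline zap and the improved error reporting together, a sketch of a corrected generate() (it also checks and closes each created file inside the loop, where the original closed only the last one; <errno.h> is assumed to be included):

void generate(char const *fileName)
{
    char id[MAXCHAR];
    char str[MAXCHAR];

    FILE *fp = fopen(fileName, "r");
    if (fp == NULL)
    {
        fprintf(stderr, "Could not open file %s for reading: %s\n", fileName, strerror(errno));
        return;
    }
    while (fgets(str, MAXCHAR, fp) != NULL)
        strcpy(id, str);
    fclose(fp);

    id[strcspn(id, "\n")] = '\0';   // zap the newline that fgets kept

    char filename[100];
    for (int i = 0; i < 8; i++)
    {
        snprintf(filename, sizeof(filename), "%s%d.txt", id, i);
        FILE *filePtr = fopen(filename, "w");
        if (filePtr == NULL)
        {
            fprintf(stderr, "Could not create file %s: %s\n", filename, strerror(errno));
            return;
        }
        fclose(filePtr);            // close each file as it's created
    }
}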
I normally use a set of error reporting functions that I wrote, one of which tells the library the name of the program (err_setarg0(argv[0]); in main()) and the others of which produce error messages as desired. This code is available in my SOQ (Stack Overflow Questions) repository on GitHub as files stderr.c and stderr.h in the src/libsoq sub-directory.
I'd write:
if (fp == NULL)
    err_syserr("failed to open file '%s' for reading: ", fileName);
The function doesn't return. If I wanted to return, I'd use err_sysrem() ('remark') and arrange a return. The sys part of the name means that the error number and message are automatically reported too. I prefer these to perror() because perror() doesn't make it easy to get the program name etc into the error message.
There are analogous libraries available on some systems: err(3) on macOS, also available on Linux, does roughly the same job.
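For instance, with <err.h> (BSD/macOS, also shipped with glibc), a sketch of the same check:

#include <err.h>

if (fp == NULL)
    err(1, "failed to open file '%s' for reading", fileName);
// err() appends a colon, strerror(errno), and a newline, then exits with status 1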

can't access a place in memory

I'm trying to read a binary file of 32 bytes in C, however I keep getting "segmentation fault (core dumped)" when I run my program.
It would be great if somebody could help me out by pointing out where I went wrong.
My code is here below:
int main()
{
    char *binary = "/path/to/myfiles/program1.ijvm";
    FILE *fp;
    char buffer[32];

    // Open read-only
    fp = fopen(binary, "rb");

    // Read 128 bytes into buffer
    fread(buffer, sizeof(char), 32, fp);
    return 0;
}
It's because of the path. Make sure that "/path/to/myfiles/program1.ijvm" points to an existing file.
You should always check the return value of fopen.
// Open read-only
fp = fopen(binary, "rb");
if (fp == NULL) {
    perror("problem opening the file");
    exit(EXIT_FAILURE);
}
Notice also that you are reading 32 bytes into your buffer, not 128 as your comment says.
You must check the return result from fopen().
I'm assuming you are getting the segfault in the fread() call because your data file doesn't exist, or couldn't be opened, and you are trying to work on a NULL FILE structure.
See the following safe code:
#include <stdio.h>
#include <stdint.h>

#define SIZE_BUFFER 32

int main()
{
    char *binary = "data.txt";
    FILE *fp = NULL;
    char buffer[SIZE_BUFFER];

    // Open read-only
    fp = fopen(binary, "rb");

    // Read SIZE_BUFFER bytes into buffer
    if (fp)
    {
        printf("Elements read %zu\n", fread(buffer, sizeof(char), SIZE_BUFFER, fp));
        fclose(fp);
    }
    else
    {
        // Use perror() here to show a text description of what failed and why
        perror("Unable to open file: ");
    }
    return 0;
}
When I execute this code it doesn't crash: it prints the number of elements read if the file is opened, or "Unable to open file" if the file could not be opened.
As mentioned in the comments, you should also close the file before exiting. Another thing you can do is the following:
FILE *fp = fopen(.....);
Instead of declaring and assigning in two separate steps.
There are two possible reasons
The fopen(3) call failed for some reason, leaving fp NULL, and then you are trying to use that null pointer in fread(3). This can crash. @OznOg has already given a subtle hint to look in this direction.
If the fopen call is a success (i.e. fp is non-NULL after calling fopen), the code can still crash if the read overruns the buffer: the comment promises 128 bytes, and reading 128 chars into the 32-char buffer would overflow it (as written, the fread of 32 chars exactly fills it).

Chmod in C assigning wrong permissions

The following is my code for a function that copies a file, given a path to the source file and a directory provided as the destination. The copy works perfectly fine, however my chmod call assigns the wrong permissions to the copied file in the destination. If the permission on the source is 644, the copied file has a permission of 170 or 120.
I have been attempting to debug this for hours and it's driving me slightly crazy so any help is greatly appreciated.
void copy_file(char *src, char *dest) {
    char a;

    // extract file name through a duplicate ptr
    char *fname = strdup(src);
    char *dname = basename(fname);

    // open read and write streams
    FILE *read;
    FILE *write;
    read = fopen(src, "r");
    chdir(dest);
    write = fopen(dname, "w");

    // error checking
    if (read == NULL) //|| (write == NULL))
    {
        perror("Read Error: ");
        exit(0);
    }
    else if (write == NULL)
    {
        perror("Write Error: ");
        exit(0);
    }

    // write from src to dest char by char
    while (1) {
        a = fgetc(read);
        if (a == EOF)
        {
            break;
        }
        fputc(a, write);
    }

    // close files
    fclose(read);
    fclose(write);

    // this is where I attempt to assign source file permissions
    // and it goes horribly wrong
    struct stat src_st;
    if (stat(src, &src_st)) {
        perror("stat: ");
    }
    chmod(dname, src_st.st_mode);
    printf("%o\n", src_st.st_mode & 0777);
}
You fopen(src, "r"), then you chdir(dest). This means that when you later call stat(src, &src_st), there is no reason to think that stat will access the same file as fopen did, or indeed that stat will access any file at all.
If stat fails, you proceed to call chmod anyway, so you pass whatever random junk was in src_st.st_mode to chmod.
You should use fstat(fileno(read), &src_st) before calling fclose(read), instead of calling stat(src, &src_st).
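A sketch of that approach, using the streams the function already has open (fstat before the fclose calls, and fchmod on the destination while it's still open):

#include <sys/stat.h>   // fstat, fchmod

struct stat src_st;
if (fstat(fileno(read), &src_st) != 0) {
    perror("fstat");
    exit(1);
}
if (fchmod(fileno(write), src_st.st_mode) != 0) {
    perror("fchmod");
    exit(1);
}
fclose(read);
fclose(write);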
The basic problem is you have to check your system calls like fopen, chdir, and stat immediately.
For example, first thing I tried was copy_file( "test.data", "test2.data" ) not realizing it expected a destination directory.
char* fname = strdup(src);
char* dname = basename(fname);
dname is now test.data, same as the source.
read = fopen(src, "r"); // succeeds
chdir(dest); // fails
write = fopen(dname, "w"); // blows away test.data, the source
You do eventually check read and write, but after the damage has been done.
Blowing away your source file is really bad. It's important that your code deals with failed system calls. If you don't, it will sail along causing confusion and destruction.
Most system calls in C return 0 for success. The return value is an error flag: zero (false) means success, and non-zero means failure (though stat doesn't encode the kind of error in its return value, it uses errno).
When it fails, stat returns -1, which is truthy, so your check does fire; but all it does is print a message, and then the code calls chmod with whatever random junk was in src_st.st_mode anyway.
struct stat src_st;
if (stat(src, &src_st)) {
    perror("stat: ");
}
Instead, you have to check explicitly and bail out on failure.
struct stat src_st;
if (stat(src, &src_st) != 0) {
    // Note that I don't use perror, it doesn't provide enough information.
    fprintf(stderr, "Could not stat %s: %s\n", src, strerror(errno));
    exit(1);
}
As you can guess this gets tedious in the extreme, and you're going to forget, or do it slightly different each time. You'll want to write wrappers around those functions to do the error handling for you.
FILE *fopen_checked( const char *file, const char *mode ) {
    FILE *fp = fopen(file, mode);
    if( fp == NULL ) {
        fprintf(stderr, "Could not open '%s' for '%s': %s\n", file, mode, strerror(errno));
        exit(1);
    }
    return fp;
}
It's not the best error handling, but it will at least ensure your code appropriately halts and catches fire.
A note about chdir: if you can avoid it don't use it. chdir affects the global state of the program, the current working directory, and globals add complexity to everything. It's very, very easy for a function to change directory and not change back, as yours does. Now your process is in a weird state.
For example, if one did copy_file( "somefile", "foo" ) this leaves the program in foo/. If they then did copy_file( "otherfile", "foo" ) they'd be trying to copy foo/otherfile to foo/foo/otherfile.
And, as @robmayoff pointed out, your stat fails because the process is now in a different directory. So even the function doing the chdir is confused by it.
Ensuring that your functions always chdir back to the original directory in a language like C is very difficult and greatly complicates error handling. Instead, stay in your original directory and build full destination paths (for example with snprintf), rather than changing into the destination.
Finally, avoid mixing your file operations. Use filenames or use file descriptors, but try not to use both. That means if you're using fopen, use fstat and fchmod. You might have to use fileno to get a file descriptor out of the FILE pointer.
This avoids having to carry around and keep in sync two pieces of data, the file descriptor and the filename. It also avoids issues with chdir or the file being renamed or even deleted, the file descriptor will still work so long as it remains open.
This is also a problem:
char a;
...
while (1) {
    a = fgetc(read);
    if (a == EOF)
    {
        break;
    }
    fputc(a, write);
}
fgetc() returns int, not char. Per the C Standard, 7.21.7.1 The fgetc function:
7.21.7.1 The fgetc function
Synopsis
#include <stdio.h>
int fgetc(FILE *stream);
Assuming sizeof( int ) > sizeof( char ), char values are signed, 2s-complement integers, and EOF is an int defined to be -1 (all very common values), reading a file with char a = fgetc( stream ); will fail upon reading a valid 0xFF character value, which truncates to the char value -1 and compares equal to EOF. And if your implementation's char is unsigned by default, char a = fgetc( stream ); will never produce a value that matches EOF.
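So the copy loop should store the result in an int, a sketch:

int c;                              // int, not char, so EOF stays distinguishable
while ((c = fgetc(read)) != EOF)
    fputc(c, write);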

why fread sometimes encounters "Bad file descriptor"?

I am reading from a file like this:
#include <stdio.h>
#include <stdlib.h>

int main() {
    FILE *fp = fopen("sorted_hits", "r+");
    while (!feof(fp)) {
        int item_read;
        int *buffer = (int *)malloc(sizeof(int));
        item_read = fread(buffer, sizeof(int), 1, fp);
        if (item_read == 0) {
            printf("at file %d\n", ftell(fp));
            perror("read error:");
        }
    }
}
This file is big and I get the "Bad file descriptor" error sometimes. ftell indicates the file position at which the error occurred.
I don't know why it happens only sometimes; is that normal? Does the problem lie in my code or in my hard disk? How do I handle this?
perror prints whatever is in errno as a descriptive string. errno gets set to an error code whenever a system call has an error return. But, if a system call DOESN'T fail, errno doesn't get modified and will continue to contain whatever it contained before. Now if fread returns 0, that means that either there was an error OR you reached the end of the file. In the latter case, errno is not set and might contain any random garbage from before.
So in this case, the "Bad file descriptor" message you're getting probably just means there hasn't been an error at all. You should be checking ferror(fp) to see if an error has occurred.
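For example, a sketch of telling the two cases apart after a short read:

size_t n = fread(buffer, sizeof(int), 1, fp);
if (n == 0) {
    if (ferror(fp))
        perror("read error");              // errno is meaningful here
    else if (feof(fp))
        fprintf(stderr, "end of file\n");  // errno was never touched
}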
You seem to be mixing text and binary modes when reading the file.
Normally when you use fread you read from a binary file, i.e. fread reads a number of bytes matching the buffer size, but you seem to be opening the file in text mode ("r+"). ftell doesn't work reliably on files opened in text mode because newlines are treated differently than other characters.
Open the file in binary mode (untranslated) instead:
FILE *fp = fopen("sorted_hits", "rb+");
If that's really what your loop looks like, my guess would be that you're getting a more or less spurious error because your process is simply running out of memory: the loop leaks it badly, calling malloc on every iteration with no matching call to free anywhere.
It's also possible (but a lot less likely) that you're running into a little problem from your (common but nearly always incorrect) use of while (!feof(fp)).
Your call to printf also gives undefined behavior because you've mismatched the conversion and the type: ftell returns a long, which %d doesn't match (though on many current systems it's irrelevant because long and int are the same size).
Fixing those may or may not remove the problem you've observed, but at least if you still see it, you'll have narrowed down the possibilities of what may be causing the problem.
#include <stdio.h>

int main() {
    FILE *fp = fopen("sorted_hits", "r+");
    int buffer;

    while (0 != fread(&buffer, sizeof(int), 1, fp))
        ;   // read file but ignore contents.

    if (ferror(fp)) {
        printf("At file: %ld\n", ftell(fp));
        perror("read error: ");
    }
    fclose(fp);
}
