This question already has answers here:
Why does printf not flush after the call unless a newline is in the format string? (10 answers)
I have the following snippet of code at the beginning of my program:
printf("Starting extraction of file %s \n", tarName);
// Open the tarFile
FILE* tarFile = fopen(tarName, "r");
if(tarFile == NULL) return EXIT_FAILURE;
// Read nFiles
printf("Reading header...");
...
When I execute it from the terminal I get the following output:
Starting extraction of file test.mytar
And then the program freezes, apparently never reaching the second printf.
test.mytar is an existing file in the same folder as my executable, which is also the folder I am running the program from in the terminal.
I created the file byte by byte myself, so it could be violating some file-format conventions I am not aware of.
What could possibly be going on here?
As pointed out in the comments, two things may happen.
a) the fopen fails (I/O error, permission denied, missing file, ...). To know the exact cause, print the message corresponding to errno (or call GetLastError() on Windows):
if(tarFile == NULL) {
    /* requires <string.h> and <errno.h> */
    printf("%s\n", strerror(errno));
    return EXIT_FAILURE;
}
b) the fopen succeeds, but printf("Reading header..."); does not show anything because the message is still sitting in the stdout buffer and has not been written yet. To fix this, you can generally add a '\n' at the end of the message.
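Alternatively, if you want to keep the message without a trailing newline, you can flush stdout explicitly. A minimal sketch of that approach, assuming the same printf call as in your snippet:

#include <stdio.h>

int main(void)
{
    printf("Reading header...");   /* no newline, so the text may stay in the buffer */
    fflush(stdout);                /* force the buffered text out immediately */
    /* ... long-running work goes here ... */
    return 0;
}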
Related
I'm trying to read from a file for use in a small game I've created. I'm using the fgets function and getting a segmentation fault, but I'm not sure why.
The file being read is a .txt file that just contains "20 10", which is the map size.
My readfile function is shown below
if (argc == 2) {
    f = fopen("map.txt", "r");
    if (NULL == f) {
        printf("File cannot be opened.");
    }
    while (fgets(fileRead, 50, f) != NULL) {
        printf("%s", fileRead);
    }
    fclose(f);
}
The if (argc == 2) check can be ignored; it is only there so that this section runs while I'm modifying the file, i.e. I run just this function by satisfying that if statement.
I am fairly new to C, so apologies if I'm missing something minor. It's worth noting that I'm programming in C89 and using the -Wall -ansi -pedantic compile options, as this is university work and the tutors want us to use C89.
EDIT:
char userInput, fileRead[50];
FILE* f;
Declaration of variables.
Assuming that your problem is indeed in your posted code and not somewhere else in the program, I believe it is caused by the following issue:
After calling fopen, you check the return value of the function immediately afterwards, to verify that it succeeded. However, if it doesn't succeed and it returns NULL, all you do is print an error message to stdout but continue execution as if it succeeded. This will cause fgets to be called with NULL as the stream argument, which will invoke undefined behavior and probably cause your segmentation fault.
In the comments section, you raised the following objection to this explanation:
However it doesn't print the error message anyway and still segmentation faults, so I think the problem isn't here?
This objection of yours is flawed, for the following reason:
When a segmentation fault occurs, execution of the program is immediately halted. The content of the output buffer is not flushed. This means that output can get lost when a segmentation fault happens. This is probably what is happening in your case.
If you want to ensure that the output actually gets printed even in the case of a segmentation fault, you should flush the output buffer by calling fflush( stdout ); immediately after the print statement. Alternatively, you can print to stderr instead of stdout. In contrast to stdout, the stream stderr is unbuffered by default, so it does not have this problem.
You can test whether my suspicion is correct by changing the line
printf("File cannot be opened.");
to
printf("File cannot be opened.");
fflush( stdout );
or to:
fprintf( stderr, "File cannot be opened." );
If the error message now gets printed, then this probably means that my suspicion was correct.
In any case, I recommend that you change the lines
if (NULL == f) {
printf("File cannot be opened.");
}
to the following:
if (NULL == f) {
fprintf( stderr, "File cannot be opened." );
exit( EXIT_FAILURE );
}
That way, the program will exit immediately if an error occurs, instead of continuing execution.
Please note that the code posted above requires you to #include <stdlib.h>.
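Putting the pieces together, a minimal C89 sketch of the corrected reading code might look like this (the file name and buffer size are taken from your post; the rest is just one reasonable arrangement):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char fileRead[50];
    FILE *f = fopen("map.txt", "r");

    if (NULL == f) {
        /* report on stderr and stop instead of continuing with a NULL stream */
        fprintf(stderr, "File cannot be opened.\n");
        exit(EXIT_FAILURE);
    }
    while (fgets(fileRead, 50, f) != NULL) {
        printf("%s", fileRead);
    }
    fclose(f);
    return 0;
}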
I had read that both perror() and printf() write to the terminal screen, but perror() writes to stderr while printf() writes to stdout. So, to print errors, why is perror() used when printf() can do it?
printf() cannot write to stderr. fprintf() can. perror() always does.
There is no requirement that writing to either stdout or stderr writes to a terminal screen - that is up to the implementation (since not all systems even have a terminal). There is also no requirement that writing to stdout and stderr results in writing to the same device (e.g. one can be redirected to a file, while the other is redirected to a pipe).
perror() is implemented with built-in knowledge of the meanings of the error codes stored in errno, which is used by various functions in the standard library to report error conditions. The meanings of particular values are implementation defined (i.e. they vary between compilers and libraries).
Because there could be configurations where you want stderr printed to the console but the other output not printed at all (for example, to reduce verbosity). In other cases you may need to redirect stderr to a file; this is useful in production, where that file can be used to understand what went wrong on a remote computer you can't debug yourself.
In general, you gain more control over how console output is treated depending on its type.
See this answer to understand how you can do stream redirection in code.
Or, see this link for how to force stream redirection to a file, or ignore a stream entirely, for an already compiled program (when invoking it in bash).
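For the in-code case, a minimal sketch of redirecting stderr to a file with freopen (the file name errors.log is just an example):

#include <stdio.h>

int main(void)
{
    /* from here on, everything written to stderr goes into errors.log */
    if (freopen("errors.log", "w", stderr) == NULL) {
        printf("could not redirect stderr\n");
        return 1;
    }
    fprintf(stderr, "this line goes to errors.log, not to the terminal\n");
    return 0;
}

From the shell, the equivalent for an already compiled program would be something like ./prog 2>errors.log, or ./prog 2>/dev/null to discard the stream.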
In addition to other answers, you might use fprintf(3) on stderr and errno(3) with strerror(3) like
fprintf(stderr, "something wrong: %s\n", strerror(errno));
On GNU libc systems (many Linux systems), you could instead use the %m conversion specifier:
fprintf(stderr, "something wrong: %m\n");
You conventionally should output error messages to stderr (see stderr(3)); see also syslog(3) to use system logging.
Don't forget to end the format string with \n, since stderr is often line buffered (but sometimes not), or else use fflush(3).
For example, you might want to show both the error, the filename and the current directory on fopen failure:
// needs <stdio.h>, <stdlib.h>, <string.h>, <errno.h>, <assert.h> and <unistd.h> (for getcwd)
char* filename = somefilepath();
assert (filename != NULL);
FILE* f = fopen(filename, "r");
if (!f) {
    int e = errno; // keep errno, it could be later overwritten
    if (filename[0] == '/') // absolute path
        fprintf(stderr, "failed to open %s : %s\n", filename, strerror(e));
    else { // we also try to show the current directory since relative path
        char dirbuf[128];
        memset (dirbuf, 0, sizeof(dirbuf));
        if (getcwd(dirbuf, sizeof(dirbuf)-1))
            fprintf(stderr, "failed to open %s in %s : %s\n",
                    filename, dirbuf, strerror(e));
        else // unlikely case when getcwd failed so errno overwritten
            fprintf(stderr, "failed to open %s here : %s\n",
                    filename, strerror(e));
    };
    exit(EXIT_FAILURE); // in all cases when fopen failed
}
Remember that errno could be overwritten by many failures (so we store it in e, to cover the unlikely case that getcwd fails and overwrites errno).
If your program is a daemon (e.g. has called daemon(3)), you'd better use the system log (i.e. call openlog(3) after calling daemon), since daemon can redirect stderr to /dev/null.
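For that daemon case, a minimal fragment-style sketch (like the snippet above) of reporting the same failure through the system log; the identifier "mydaemon" is just a placeholder:

#include <errno.h>
#include <string.h>
#include <syslog.h>

/* once at startup, after daemon(3) */
openlog("mydaemon", LOG_PID, LOG_DAEMON);

/* on fopen failure, instead of fprintf(stderr, ...) */
syslog(LOG_ERR, "failed to open %s : %s", filename, strerror(errno));

/* once at shutdown */
closelog();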
There are three standard streams: stdin, stdout and stderr. It is worth reading up on what each one is for.
stderr is used for error messages and diagnostics. To print on stderr, perror() is used; printf() cannot do that.
perror() is also used to handle errors from system calls:
fd = open(pathname, flags, mode);
if (fd == -1) {
    perror("open");
    exit(EXIT_FAILURE);
}
You can read more about this in the book The Linux Programming Interface.
void perror(const char *s)
perror prints a message in the following sequence:
the argument s, a colon, a space, a short message describing the error whose code is currently in errno, and a newline.
In standard C, if s is a null pointer, only the error message itself is printed; the prefix, colon and space are omitted.
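For example, a sketch of what that output sequence looks like in practice (the prefix myprog and the exact error wording are illustrative; the message text is implementation defined):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("does-not-exist.txt", "r");
    if (f == NULL) {
        perror("myprog");   /* prints e.g. "myprog: No such file or directory" */
        exit(EXIT_FAILURE);
    }
    fclose(f);
    return 0;
}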
For more detail, you can also refer to page 332 of C: The Complete Reference.
A big advantage of using perror():
It is sometimes very useful to redirect stdout to /dev/null so that only the errors remain visible, since the verbosity of stdout might otherwise hide the errors we need to fix.
perror
The general purpose of the function is to report an error on stderr: it prints a short, platform-dependent description of the error currently in errno, optionally preceded by your own message. It does not itself halt execution; you typically call exit() afterwards.
printf
The general purpose of the function is to print a user-defined message on stdout and continue execution; it knows nothing about errno.
Apparently, there's no existing answer to my question (I tried searching here but none of the threads I read resolved my doubt). Here it is: I'm trying desperately to figure out how to pass a correct path to fopen so that fprintf writes to the file I want, and none of my attempts have been successful. Here's the program:
#include <stdio.h>
#include <stdlib.h>
int main(){
    FILE *fp = NULL;
    //opening the file
    fp = fopen("C:/Users/User1/Desktop/myfile.txt", "w+");
    //if there's an error when opening the file, the program shuts down
    if(fp == NULL){
        printf("error");
        exit(EXIT_FAILURE);
    }
    //print something on the file the program just opened (or created if not already existent)
    fprintf(fp, "to C or not to C, that is the question");
    //closing the file
    fclose(fp);
    //end of main function
    return 0;
}
My question is: why does my program always shut down? What am I doing wrong? Is it just a Windows problem (I saw that there's a lock on the User1 folder icon, so could it be a permission-denied thing?), or am I just writing the path incorrectly? I tried using a string to store the path, I tried changing the opening mode, I even tried disabling all the antivirus, antimalware and firewall software installed on my computer, but nothing works; the program still doesn't create the file where I want it.
P.S. Sorry for bad English.
P.P.S. Sorry if a similar question has already been posted; I didn't manage to find it.
fp = fopen("C:\Users\User1\Desktop\myfile.txt", "w+");
The character \ is the escape character in C. You must escape it:
fp = fopen("C:\\Users\\User1\\Desktop\\myfile.txt", "w+");
Even better, Windows also supports the / directory separator, so you can write:
fp = fopen("C:/Users/User1/Desktop/myfile.txt", "w+");
With no need to escape the path.
Reference:
MSDN fopen, specifically the Remarks section
Use perror() to have the Operating System help you determine the cause of failure.
#define FILENAME "C:/Users/User1/Desktop/myfile.txt"
fp = fopen(FILENAME, "w+");
// report and shut down on error
if (fp == NULL) {
    perror(FILENAME);
    exit(EXIT_FAILURE);
}
My C code is:
size_t n = 0;
char *str = (char *)malloc(sizeof(char) * 1000);
FILE *fp = popen("cat /conf/a.txt", "r");
// my program reaches this code only if /conf/a.txt exists
getline(&str, &n, fp); // <== crashes if fp is NULL
My debugger shows that sometimes fp comes back NULL, and hence my program crashes at line 6. Sometimes I get a valid pointer and it passes.
What is it that controls this behaviour? I can't find a problem in the code above. Some help is appreciated.
I know I can add a check for fp == NULL, but that is not my question. I just want to know, given that the file is definitely present, why fp comes back NULL in some scenarios.
The man page of popen says: The popen() function returns NULL if the fork(2) or pipe(2) calls fail, or if it cannot allocate memory.
I checked after the crash and the system has enough memory.
strerror and errno are your friends.
Example from the C++ references linked:
/* strerror example : error list */
#include <stdio.h>
#include <string.h>
#include <errno.h>
int main ()
{
  FILE * pFile;
  pFile = fopen ("unexist.ent","r");
  if (pFile == NULL)
    printf ("Error opening file unexist.ent: %s\n",strerror(errno));
  return 0;
}
Example output:
Error opening file unexist.ent: No such file or directory
Using this method of checking errno after a failure will allow you to better diagnose your issue, as it will print a more specific error message. There are many reasons a file can't be opened: no permission, bad path, file locked by another process, I/O errors during reading, etc. Ultimately your question seems to be asking why the open failed, and using these tools will answer that for you.
Update For Tag Change:
I've referenced and linked to C++ resources, but strerror and errno are both available in C as well, by including <string.h> and <errno.h>.
popen() also fails if too many file descriptors are open in one process. I had one case in a server app that was periodically scanning a directory for files. There was one scenario where no fclose call was made, so after some hours we reached the limit of 1024 open file descriptors, and from that moment on consecutive popen() calls would fail.
You can use ps -aux | grep {PROC_NAME} to retrieve the process id.
Then use sudo ls -l /proc/{PROC_ID}/fd to see the list of open file descriptors.
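A minimal sketch (my own illustration for a POSIX system, not the server code in question) of the pattern that avoids this kind of descriptor leak, checking popen and always pairing it with pclose:

#include <stdio.h>

int read_first_line(char *buf, size_t size)
{
    FILE *fp = popen("cat /conf/a.txt", "r");
    if (fp == NULL) {
        perror("popen");      /* fork/pipe failure, descriptor limit reached, ... */
        return -1;
    }
    if (fgets(buf, (int)size, fp) == NULL)
        buf[0] = '\0';
    pclose(fp);               /* always release the pipe and its descriptor */
    return 0;
}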
In my software I have to read multiple txt databases serially, so I read the first one, then I do something with the info I got from that file, then I open another one for writing, and so on.
Sometimes I get an error when opening OR creating a file, and after that I get errors on all the following opens/creations, which use different functions, different variables and different files.
So, for example, I call the function below, which uses two files, and I get the error "* error while opening file -%s- ..\n"; after that, all the other fopen() calls in my code go wrong!
This is an example of code for one single file:
FILE *filea;
if ((filea = fopen(databaseTmp, "rb")) == NULL) {
    printf("* error while opening file -%s- ..\n", databaseTmp);
    fclose(filea);
    printf("---------- createDatabaseBackup ----------\n");
    return -1;
}
int emptyFolder = 1;
FILE *fileb;
if ((fileb = fopen(databaseBackup, "ab")) == NULL) {
    printf("* error while opening file -%s- ..\n", databaseBackup);
    fclose(fileb);
    printf("---------- createDatabaseBackup ----------\n");
    return -1;
}
else {
    int i = 0;
    char c[500] = "";
    for (i = 0; fgets(c, 500, filea); i++) {
        fprintf(fileb, "%s", c);
        emptyFolder = 0;
    }
}
fclose(fileb);
fclose(filea);
There is an upper limit on the number of open handles for a given process. Maybe you have a handle leak in your program?
An error while creating a file typically means you don't have access permission to the parent folder.
Those error log messages belong to your program; you can enhance them further. There is an error number (errno) set by the OS, since fopen essentially wraps a system call. You can print that error number and get more information about your issue.
If fopen returned NULL, the file wasn't opened, so there's no point in trying to fclose it.
You should also check the return value of fgets for more than just NULL versus non-NULL. Note, though, that with a size of 500, fgets reads at most 499 characters and always null-terminates the buffer on success, so the fprintf will not read past the end of c; the bigger problem is calling fgets and fclose with a NULL FILE * after a failed fopen, as noted above.
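Putting those points together, a minimal sketch (my own, reusing the messages and variable names from your code) of the corrected open/check pattern for the first file:

/* needs <stdio.h>, <string.h> and <errno.h>; databaseTmp comes from your code */
FILE *filea = fopen(databaseTmp, "rb");
if (filea == NULL) {
    /* report why it failed, and do not fclose a NULL stream */
    fprintf(stderr, "* error while opening file -%s- : %s\n",
            databaseTmp, strerror(errno));
    printf("---------- createDatabaseBackup ----------\n");
    return -1;
}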