segfault during fclose() - c

fclose() is causing a segfault. I have:
char buffer[L_tmpnam];
char *pipeName = tmpnam(buffer);
FILE *pipeFD = fopen(pipeName, "w"); // open for writing
...
...
...
fclose(pipeFD);
I don't do any file-related stuff in the ... yet, so that doesn't affect it. However, my MAIN process communicates with another process through shared memory where pipeName is stored; the other process fopens this pipe for reading to communicate with MAIN.
Any ideas why this is causing a segfault?
Thanks,
Hristo

Pass pipeFD to fclose. fclose closes the file by its FILE * handle, not by the filename char *. C compilers (unlike C++ compilers) typically accept implicit conversions between incompatible pointer types (in this case char * to FILE *) with only a warning, so that's where the bug comes from.
Check if pipeFD is non-NULL before calling fclose.
Edit: You confirmed that the error was due to fopen failing; you need to check for the error like so:
pipeFD = fopen(pipeName, "w");
if (pipeFD == NULL)
{
    perror("The following error occurred");
}
else
{
    fclose(pipeFD);
}
Example output:
The following error occurred: No such file or directory

A crash in fclose implies the FILE * passed to it has been corrupted somehow. This can happen if the pointer itself is corrupted (check in your debugger to make sure it has the same value at the fclose as was returned by the fopen), or if the FILE data structure gets corrupted by some random pointer write or buffer overflow somewhere.
You could try using valgrind or some other memory corruption checker to see if it can tell you anything. Or use a data breakpoint in your debugger on the address of the pipeFD variable. Using a data breakpoint on the FILE itself is tricky, as it's multiple words and is modified by normal file I/O operations.
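For example, in gdb you could set a data breakpoint (watchpoint) on the pointer variable; the names below assume the local variable pipeFD from the snippet above:

(gdb) break main
(gdb) run
(gdb) watch pipeFD
(gdb) continue

gdb then stops as soon as the value of pipeFD changes, which lets you catch any stray write that corrupts the pointer between the fopen and the fclose.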

You should close pipeFD instead of pipeName.

Why does the program give "Segmentation fault" after 1018 tries

I'm writing C code to open a txt file, read two lines from it, and print the values.
It worked 1018 times, then it gives a "Segmentation fault".
I've tried flushing the buffer but it doesn't work.
while (running) {
    i = 0;
    if ((fptr = fopen("pwm.txt", "r")) == NULL) {
        printf("Error! File cannot be opened.");
        // Program exits if the file pointer returns NULL.
        exit(1);
    }
    fptr = fopen("pwm.txt", "r");
    while (fgets(line, sizeof(line), fptr)) {
        ppp[i] = atoi(line);
        i++;
    }
    fclose(fptr);
    printf("%d %d\n", ppp[0], ppp[1]);
    rc_servo_send_pulse_us(ch, 2000);
    rc_usleep(1000000 / frequency_hz);
}
Actually, the file-opening is the likely culprit: On e.g. Linux there's a limit to how many files you can have open, and it typically defaults to 1024. A few files are used for other things, and your program probably uses some other file-handles elsewhere, leaving only around 1018 left over.
So when you open the file twice per iteration, you leak the file handle from the first fopen call, and eventually your second fopen call will fail and return a NULL pointer. And since you don't check for NULL the second time, you end up using this NULL pointer and crash.
Simple solution: Remove the second and unchecked call to fopen.
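A minimal sketch of the fixed loop, keeping the variables from the question:

while (running) {
    i = 0;
    /* The only fopen: its result is checked before use. */
    if ((fptr = fopen("pwm.txt", "r")) == NULL) {
        printf("Error! File cannot be opened.");
        exit(1);
    }
    while (fgets(line, sizeof(line), fptr)) {
        ppp[i] = atoi(line);
        i++;
    }
    fclose(fptr); /* Every fopen is paired with exactly one fclose, so no handles leak. */
    printf("%d %d\n", ppp[0], ppp[1]);
    rc_servo_send_pulse_us(ch, 2000);
    rc_usleep(1000000 / frequency_hz);
}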

Checking the return value of fclose() [duplicate]

Is it required to check the return value of fclose? If we have successfully opened a file, what are the chances that it may fail to close?
When you fwrite to a file, it may not actually write anything; the data may stay in a buffer (inside the FILE object). Calling fflush would actually write it to disk. That operation may fail, for example if you have just run out of disk space, or there is some other I/O error.
fclose flushes the buffers implicitly too, so it may fail for the same reasons.
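A quick way to see this on Linux is the /dev/full device, where every physical write fails with ENOSPC; the sketch below assumes such a system:

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("/dev/full", "w"); /* every write to /dev/full fails with ENOSPC */
    if (fp == NULL) {
        perror("/dev/full");
        return 1;
    }
    fputc('x', fp);            /* "succeeds": the byte only lands in the stdio buffer */
    if (fclose(fp) == EOF)     /* the implicit flush hits the write error here */
        perror("fclose");      /* prints: fclose: No space left on device */
    return 0;
}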
From comp.lang.c:

The fclose() call can fail, and should be error-checked just as assiduously as all the other file operations. Sounds pedantic, right? Wrong. In a former life, my company's product managed to destroy a customer's data by omitting a check for failure when closing a file. The sequence went something like (paraphrased):

stream = fopen(tempfile, "w");
if (stream == NULL) ...
while (more_to_write)
    if (fwrite(buffer, 1, buflen, stream) != buflen) ...
fclose (stream);
/* The new version has been written successfully. Delete
 * the old one and rename.
 */
remove (realfile);
rename (tempfile, realfile);

Of course, what happened was that fclose() ran out of disk space trying to write the last couple blocks of data, so the `tempfile' was truncated and unusable. And since the fclose() failure wasn't detected, the program went right ahead and destroyed the best extant version of the data in favor of the damaged version. And, as Murphy would have it, the victim in this particular incident was the person in charge of the customer's department, the person with authority to buy more of our product or replace it with a competitor's product -- and, natch, a person who was already unhappy with us for other reasons. It'd be a stretch to ascribe all of the ensuing misery to this single omission, but it may be worth pointing out that both the customer and my former company have since vanished from the corporate ecology.

CHECK THOSE FAILURE CODES!
You could (and should) report the error, but in a sense, the stream is still closed:
After the call to fclose(), any use of stream results in undefined behavior.
fclose() will flush any unwritten output (via fflush()) before returning, so an error from the underlying write() won't be reported at fwrite() or fprintf() time, but when you call fclose(). As a result, any error that write() or fflush() can generate can be generated by fclose().
fclose() will also call close(), which can generate errors on NFS clients, where the changed file isn't actually uploaded to the remote server until close() time. If the NFS server crashed, the close() will fail, and thus fclose() will fail as well. This may be true of other networked filesystems too.
You must ALWAYS check the result of fclose().
Let's say you're generating data. You have old data that you fread() from a file; you do some processing on it, generate more data, and write the result to a new file. You are careful not to overwrite the old file, because you know that creating the new file might fail, and you would like to keep your old data in that case (some data is better than no data). After finishing all the fwrite()s, which all succeed (because you meticulously checked the return value of fwrite()), you fclose() the file. Then you rename() your just-written file over the old file.
If fclose() failed because of a write error (disk full?), you just overwrote your last good file with something that might be junk. Oops.
So, if it is critical, you should check the return value of fclose().
In terms of code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *ifp = fopen("in.dat", "rb");
    FILE *ofp = fopen("out.dat", "wb");
    char buf[BUFSIZ];
    size_t n;
    int success = 1;

    if (ifp == NULL) {
        fprintf(stderr, "error opening in.dat\n");
        perror("in.dat");
        return EXIT_FAILURE;
    }
    if (ofp == NULL) {
        fclose(ifp);
        fprintf(stderr, "error opening out.dat\n");
        perror("out.dat");
        return EXIT_FAILURE;
    }
    while ((n = fread(buf, 1, sizeof buf, ifp)) > 0) {
        size_t nw;
        if ((nw = fwrite(buf, 1, n, ofp)) != n) {
            fprintf(stderr, "error writing, wrote %lu bytes instead of %lu\n",
                    (unsigned long)nw,
                    (unsigned long)n);
            fclose(ifp);
            fclose(ofp);
            return EXIT_FAILURE;
        }
    }
    if (ferror(ifp)) {
        fprintf(stderr, "ferror on ifp\n");
        fclose(ofp);
        fclose(ifp);
        return EXIT_FAILURE;
    }
#ifdef MAYLOSE_DATA
    fclose(ofp);
    fclose(ifp);
    rename("out.dat", "in.dat"); /* Oops, may lose data */
#else
    if (fclose(ofp) == EOF) {
        perror("out.dat");
        success = 0;
    }
    if (fclose(ifp) == EOF) {
        perror("in.dat");
        success = 0;
    }
    if (success) {
        rename("out.dat", "in.dat"); /* Good */
    }
#endif
    return success ? EXIT_SUCCESS : EXIT_FAILURE;
}
In the above code, we have been careful about fopen(), fwrite(), and fread(), but even then, not checking fclose() may result in data loss (when compiled with MAYLOSE_DATA defined).
One reason fclose can fail is if there is any data still buffered and the implicit fflush fails. What I recommend is to always call fflush and do any error handling there.
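As a sketch of that pattern (the stream fp and the file name out.dat are hypothetical here):

if (fflush(fp) == EOF) {
    perror("out.dat");  /* the buffered data could not be written */
    /* handle the write error here, while the stream is still open */
}
if (fclose(fp) == EOF) {
    perror("out.dat");  /* closing can still fail, e.g. on networked filesystems */
}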
I've seen fclose() return non-zero many times, and on careful examination the actual problem was with a write, not with fclose itself.
Since the data being written is buffered before the actual write happens, and calling fclose() flushes the whole buffer, any problem writing the buffered data, such as a full disk, shows up during fclose(). As David Yell says, to write a bulletproof application you need to consider the return value of fclose().
The fclose man page cites that it may fail for any of the reasons that close or fflush may fail.
Quoting:
The close() system call will fail if:
[EBADF] fildes is not a valid, active file descriptor.
[EINTR] Its execution was interrupted by a signal.
[EIO] A previously-uncommitted write(2) encountered an input/output error.
fflush can fail for reasons that write() would fail, basically in the event that you can't actually write to/save the file.
In a sense, closing a file never fails: errors are returned if a pending write operation failed, but the stream will be closed.
To avoid problems and to ensure (as far as you can from a C program) that your data has actually been written, I suggest you:
Properly handle errors returned by fwrite().
Call fflush() before closing the stream. Do remember to check for errors returned by fflush().
If, in the context of your application, you can think of something useful to do if fclose() fails, then test the return value. If you can't, don't.

C segmentation fault when opening file

This seems to be a really simple one, but I can't figure it out after not touching C programming in four years.
I was trying to open a file in main()
int main(int argc, const char * argv[])
{
    FILE * fp = fopen("data.txt", "r");
    ...
    return 0;
}
The program compiled, but when I tried to run it in gdb, the following error occurs.
Program received signal SIGSEGV, Segmentation fault.
0x00000000004016c6 in main ()
when the program is trying to open the file "data.txt". What could cause the error? Thanks!
I suspect your error lies in this bit of code:
...
In other words, there's nothing in the other code shown that appears to be wrong.
The most likely case is that the file doesn't exist, or it doesn't exist in the directory where the program is running (which, if you're in an IDE, usually turns out to be somewhere other than you think it is).
And, in that case, you're getting NULL from the fopen, then later using it, something like:
FILE *fp = fopen ("no_such_file.txt", "r");
int ch = fgetc (fp);
You should generally check return values from all functions that use them to indicate success or failure:
#include <stdio.h>

int main (void) {
    FILE *fp = fopen ("no_such_file.txt", "r");
    if (fp == NULL) {
        perror ("Opening no_such_file.txt");
        return 1;
    }

    // You can use fp here.
    puts ("It worked.");

    fclose (fp);
    return 0;
}
What could cause the error?
The most likely cause of the error is that the file data.txt could not be opened (e.g. because it doesn't exist, it's not in the current directory, or your program doesn't have permission to read it). That will cause fopen() to return NULL. If your code (in the ... section) then calls fread() or fgets() or whatever and passes in the NULL pointer, that will cause a crash. You need to check that the value returned by fopen() is non-NULL before trying to use it.

fclose() causes Segmentation Fault

I've been trying simple file handling in C, and I wanted to make sure that the file can be accessed, so I tried this:
#include <stdio.h>

main()
{
    CheckFile();
}

int CheckFile()
{
    int checkfile = 0;
    FILE *fp1;
    fp1 = fopen("users.sav", "r");
    if (fp1 == NULL)
    {
        fopen("users.sav", "w");
        fclose(fp1);
    }
    if(checkfile!=0)printf("\nERROR ACCESSING FILE!\nNow exiting program with exit code: %d\n",checkfile);exit(1);
    return 0;
}
then it displays
Segmentation fault (core dumped)
but it doesn't segfault if the file already exists beforehand (e.g. when I created it manually or when I run the program a second time).
Please help. I need this for our final project due in a week and I haven't gotten the hang of files and pointers yet.
I'm using "gcc (Ubuntu/Linaro 4.8.1-10ubuntu9) 4.8.1"
P.S.
I saw this in another question
There's no guarantee in your original code that the fopen is actually working, in which case it will return NULL and the fclose will invoke undefined behaviour.
So how exactly do I check if it worked?
That's normal when you call fclose(fp1) while fp1 is NULL.
BTW
fopen("users.sav","w");
is useless because you don't assign the return value to a file pointer. That means the users.sav file will be opened for writing, but you will never be able to write anything to it.
fopen returns a FILE pointer. On failure it returns NULL and sets the global errno to indicate the error. If you want to check errno, you have to check it right after checking whether fopen returned NULL.
if (fp1 == NULL)
{
    printf("fopen failed, errno = %d\n", errno);
}
Otherwise, you may get an errno from something else, not necessarily your fopen call. Also include errno.h. You also don't need to call fopen("users.sav","w"); again: you aren't reassigning the pointer or checking it.
I don't see a reason to call fclose here, since if fopen returns NULL there isn't anything to close. That is probably the reason for your segfault: you are trying to close a null pointer. More information on fopen failures.
Another comment on your code: if you are going to return an int from CheckFile, it should probably not be 0 on failure. I would return -1 to indicate an error, or better yet, return the global errno. Also, main should be int main(), and you should return 0; at the end. I don't particularly care for the naming scheme of CheckFile; in C, check_file or camelCase checkFile would be more conventional.
In CheckFile, your one-line if statement doesn't do what you think it does. It would work properly if you formatted it across multiple lines with braces:
if(checkfile!=0)
{
printf("\nERROR ACCESSING FILE!\nNow exiting program with exit code: %d\n", checkfile);
exit(1);
}
Also, checkfile is never set anywhere in your code other than to zero, so the body of that if statement will never execute, period.
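Putting those points together, a minimal sketch of the function (using the check_file naming suggested above, and assuming the goal is just to create users.sav when it is missing):

#include <stdio.h>
#include <errno.h>

int check_file(void)
{
    FILE *fp1 = fopen("users.sav", "r");
    if (fp1 == NULL)
    {
        /* Could not open for reading (e.g. the file does not exist): try to create it. */
        fp1 = fopen("users.sav", "w");
        if (fp1 == NULL)
        {
            printf("fopen failed, errno = %d\n", errno);
            return -1;
        }
    }
    fclose(fp1); /* fp1 is known to be non-NULL here, so this is safe. */
    return 0;
}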
I'm not really sure what you're trying to do, but the immediate problem is here:
if (fp1 == NULL)
    fclose(fp1);
After checking that fp1 is NULL, you're calling fclose on the null pointer, which causes the segmentation fault.
If all you want to do is verify that the file exists, try something like What's the best way to check if a file exists in C? (cross platform)
The man page of fclose says -
The behaviour of fclose() is undefined if the stream parameter is an illegal pointer, or is a descriptor already passed to a previous invocation of fclose().
The error is in the if block in your code.
if (fp1 == NULL)
{
    fopen("users.sav", "w");
    fclose(fp1); // passing NULL to fclose invokes undefined behaviour
}
Another unrelated problem:
This line is probably not what you want:
if(checkfile!=0)printf("\nERROR ACCESSING FILE!\nNow exiting program with exit code: %d\n",checkfile);exit(1);
If we format it correctly, the error becomes obvious:
if (checkfile != 0)
    printf("\nERROR ACCESSING FILE!\nNow exiting program with exit code: %d\n", checkfile);
exit(1);
return 0;
Actually we will reach exit(1) even if checkfile is zero, because the exit(1) is not part of the if statement.
You probably want this:
if (checkfile != 0)
{
    printf("\nERROR ACCESSING FILE!\nNow exiting program with exit code: %d\n", checkfile);
    exit(1);
}
return 0;
Conclusion: format your code correctly and many errors will suddenly look obvious.
