Cannot use fclose on output stream, input stream is fine - c

Whenever I run my program with fclose(outputFile); at the very end, I get an error: glibc detected...corrupted double-linked list
The confusing thing is that the fclose(inputFile); directly above it works fine. Any suggestions?
FILE* inputFile = fopen(fileName, "r");
if (inputFile == NULL)
{
    printf("inputFile did not open correctly.\n");
    exit(0);
}

FILE* outputFile = fopen("output.txt", "wb");
if (outputFile == NULL)
{
    printf("outputFile did not open correctly.\n");
    exit(0);
}

/* ... read in inputFile ... */
/* ... some fprintf's to outputFile ... */

fclose(inputFile);
fclose(outputFile);

The problem is likely located in this section:
/* ... read in inputFile ... */
You've got some code there that corrupts the heap; overflowing an array is the typical cause. Heap corruption is rarely detected at the moment it happens. It only shows up later, when some code that allocates or releases memory runs the basic heap sanity checks built into the allocator, which is exactly what happens inside fclose() when it frees the stream's buffer.
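As an illustration (a hypothetical sketch, not the asker's code), the classic off-by-one overflow looks like this; the program runs fine until free() inspects the trampled heap metadata:
#include <stdlib.h>

int main(void)
{
    int *a = malloc(4 * sizeof *a);
    if (a == NULL)
        return 1;

    for (int i = 0; i <= 4; i++)  /* BUG: writes a[4], one past the end */
        a[i] = i;

    free(a);  /* glibc may only abort here, long after the overflow */
    return 0;
}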

To detect exactly where your code is corrupting the heap, if you are running on Linux, you should use valgrind. It's easy to use:
valgrind ./myprog arguments ...
and it will give you a stack trace from the exact point where a bad read or write occurs.
Valgrind is available from major Linux distributions or you can build from source.
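If the trace only shows raw addresses, compile with debugging symbols first; adding --leak-check=full also reports leaked allocations. A typical session (myprog stands in for your binary):
gcc -g -o myprog myprog.c
valgrind --leak-check=full ./myprog arguments ...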

Related

Why does the program give "Segmentation fault" after 1018 tries

I'm writing C code that opens a txt file, reads two lines from it, and prints the values.
It works 1018 times, then it gives "Segmentation fault".
I've tried flushing the buffer, but it doesn't help.
while (running) {
    i = 0;
    if ((fptr = fopen("pwm.txt", "r")) == NULL) {
        printf("Error! File cannot be opened.");
        // Program exits if the file pointer returns NULL.
        exit(1);
    }
    fptr = fopen("pwm.txt", "r");
    while (fgets(line, sizeof(line), fptr)) {
        ppp[i] = atoi(line);
        i++;
    }
    fclose(fptr);
    printf("%d %d\n", ppp[0], ppp[1]);
    rc_servo_send_pulse_us(ch, 2000);
    rc_usleep(1000000 / frequency_hz);
}
Actually, the file opening is the likely culprit: on Linux, for example, there's a limit on how many files a process can have open at once, and it typically defaults to 1024. A few descriptors are taken by stdin, stdout, and stderr, and your program probably uses some other file handles elsewhere, leaving only around 1018 available.
So when you open the file twice per iteration, you leak the file handle from the first fopen call; once the limit is reached, the second fopen call fails and returns NULL. Since you don't check for NULL the second time, you then use this NULL pointer and crash.
Simple solution: remove the second, unchecked call to fopen.
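A corrected sketch of the loop, assuming the surrounding declarations (running, i, fptr, line, ppp, ch, frequency_hz) and the rc_* calls from the original program; you can check your descriptor limit with ulimit -n:
while (running) {
    i = 0;
    /* one fopen per iteration, checked for NULL */
    if ((fptr = fopen("pwm.txt", "r")) == NULL) {
        printf("Error! File cannot be opened.");
        exit(1);
    }
    while (fgets(line, sizeof(line), fptr)) {
        ppp[i] = atoi(line);
        i++;
    }
    fclose(fptr);  /* releases the descriptor, so the limit is never hit */
    printf("%d %d\n", ppp[0], ppp[1]);
    rc_servo_send_pulse_us(ch, 2000);
    rc_usleep(1000000 / frequency_hz);
}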

C code popen error "cannot allocate memory" only when running in a virtual machine

I wrote a program in C that uses popen to open a pipe and execute a command line.
This program works fine on the host, but when I run it in a VirtualBox VM running Ubuntu 12.04, I get the error: cannot allocate memory.
My code is:
FILE *fp;
char path[100];

/* Open the command for reading. */
fp = popen(pdpcall, "r");
if (fp == NULL) {
    printf("Error opening pipe pdpcall: %s\n", strerror(errno));
    pclose(fp);
    exit(0);
}

/* Read the output a line at a time - output it. */
while (fgets(path, sizeof(path) - 1, fp) != NULL) {
    if (strncmp(decision, path, strlen(decision)) == 0)
    {
        pclose(fp);
        return 1;
    }
}
I also tried a test program that only calls popen in the VM, and it works fine, but the same call does not work within my program.
My guess is that my program uses a lot of malloc, possibly without freeing everything. The VM has less memory than the host, so there could be memory errors.
But after I changed the VM's memory from 512 MB to 2 GB, I still get the same error.
Is there some other problem specific to the VM? How can I solve this?
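One incidental bug in the snippet: pclose(fp) is called in the branch where fp is NULL, which is undefined behaviour. A hedged sketch of the same pattern with that fixed (match_output is a hypothetical wrapper; pdpcall and decision come from the original program):
#include <errno.h>
#include <stdio.h>
#include <string.h>

int match_output(const char *pdpcall, const char *decision)
{
    char path[100];
    FILE *fp = popen(pdpcall, "r");
    if (fp == NULL) {
        /* do NOT call pclose(NULL); just report and bail out */
        printf("Error opening pipe pdpcall: %s\n", strerror(errno));
        return -1;
    }
    while (fgets(path, sizeof(path), fp) != NULL) {
        if (strncmp(decision, path, strlen(decision)) == 0) {
            pclose(fp);
            return 1;
        }
    }
    pclose(fp);  /* always close the pipe on the no-match path too */
    return 0;
}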

C - writing a buffer into a file then FREEing the buffer causes a segfault

I'm writing a buffer into a binary file. The code is as follows:
FILE *outwav = fopen(outwav_path, "wb");
if (!outwav)
{
    fprintf(stderr, "Can't open file %s for writing.\n", outwav_path);
    exit(1);
}
[...]
// Create sample buffer
short *samples = malloc((loopcount * (blockamount - looppos) + looppos) << 5);
if (!samples)
{
    fprintf(stderr, "Error : Can't allocate memory.\n");
    exit(1);
}
[...]
fwrite(samples, 2, 16 * k, outwav);  // write sample buffer to file
fflush(outwav);
fclose(outwav);
free(samples);
The last free() call gives me random segfaults.
After several headaches I thought it was probably because the fwrite call executes after a delay and ends up reading freed memory, so I added the fflush call. Yet the problem STILL occurs.
The only way to get rid of it is to not free the memory and let the OS do it for me. That's supposed to be bad practice, though, so I'd rather ask if there is a better solution.
Before anyone asks: yes, I check that the file opened correctly; yes, I test that the memory was allocated properly; and no, I don't touch the returned pointers in any way.
Once fwrite returns, you are free to do whatever you want with the buffer, so you can remove the fflush call.
It sounds like a buffer overflow in a totally unrelated part of the program is writing over the bookkeeping information that free needs to do its work. Run your program under a tool like valgrind to find out whether this is the problem and to locate the part of the program that has the overflow.
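One concrete thing to check: the malloc above reserves (loopcount*(blockamount-looppos)+looppos) << 5 bytes, which is room for 16 shorts per unit of that expression (a short is 2 bytes, and << 5 multiplies by 32). If the elided code lets 16*k grow past that, the heap is trampled before fwrite is ever reached. A hedged sketch of a cheap guard (write_samples is a hypothetical helper, not from the original program):
#include <assert.h>
#include <stdio.h>

/* units is the value of loopcount*(blockamount-looppos)+looppos,
 * so the buffer holds 16*units shorts. */
static void write_samples(FILE *outwav, const short *samples,
                          size_t units, size_t k)
{
    assert(16 * k <= 16 * units);  /* refuse to write past the buffer */
    fwrite(samples, sizeof *samples, 16 * k, outwav);
}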

segfault during fclose()

fclose() is causing a segfault. I have:
char buffer[L_tmpnam];
char *pipeName = tmpnam(buffer);
FILE *pipeFD = fopen(pipeName, "w"); // open for writing
...
...
...
fclose(pipeName);
I don't do any file-related stuff in the ... yet, so that doesn't affect it. However, my MAIN process communicates with another process through shared memory where pipeName is stored; the other process fopen's this pipe for reading to communicate with MAIN.
Any ideas why this is causing a segfault?
Pass pipeFD to fclose: fclose closes the file by its FILE* handle, not by the filename char*. In C (unlike C++) the compiler lets the implicit pointer conversion (here char* to FILE*) through with only a warning, so that's where the bug comes from.
Also check that pipeFD is non-NULL before calling fclose.
Edit: You confirmed that the error was due to fopen failing; you need to check for the error like so:
pipeFD = fopen(pipeName, "w");
if (pipeFD == NULL)
{
    perror("The following error occurred");
}
else
{
    fclose(pipeFD);
}
Example output:
The following error occurred: No such file or directory
A crash in fclose implies the FILE * passed to it has been corrupted somehow. This can happen if the pointer itself is corrupted (check in your debugger that it has the same value at the fclose as was returned by the fopen), or if the FILE data structure gets corrupted by a stray pointer write or buffer overflow somewhere.
You could try valgrind or some other memory-corruption checker to see if it can tell you anything, or use a data breakpoint in your debugger on the address of the pipeFD variable. Using a data breakpoint on the FILE itself is tricky, as it spans multiple words and is modified by normal file I/O operations.
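With gdb, for example, such a data breakpoint would look something like this (a hedged sketch; myprog stands in for your binary, and it assumes pipeFD is in scope when the watchpoint is set):
gdb ./myprog
(gdb) break main
(gdb) run
(gdb) watch pipeFD
(gdb) continue
gdb then stops and shows a backtrace every time the value of pipeFD changes.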
You should close pipeFD instead of pipeName.

fread example from C++ Reference

I often use the website www.cplusplus.com as a reference when writing C code.
I was reading the example cited on the page for fread and had a question.
As an example they post:
/* fread example: read a complete file */
#include <stdio.h>
#include <stdlib.h>

int main () {
  FILE * pFile;
  long lSize;
  char * buffer;
  size_t result;

  pFile = fopen ( "myfile.bin" , "rb" );
  if (pFile==NULL) {fputs ("File error",stderr); exit (1);}

  // obtain file size:
  fseek (pFile , 0 , SEEK_END);
  lSize = ftell (pFile);
  rewind (pFile);

  // allocate memory to contain the whole file:
  buffer = (char*) malloc (sizeof(char)*lSize);
  if (buffer == NULL) {fputs ("Memory error",stderr); exit (2);}

  // copy the file into the buffer:
  result = fread (buffer,1,lSize,pFile);
  if (result != lSize) {fputs ("Reading error",stderr); exit (3);}

  /* the whole file is now loaded in the memory buffer. */

  // terminate
  fclose (pFile);
  free (buffer);
  return 0;
}
It seems to me that if result != lSize, then free(buffer) will never get called. Would this be a memory leak in this example?
I have always thought the examples on their site are of a very high quality. Perhaps I am not understanding correctly?
It wouldn't be a memory leak in this example, because terminating the program (by calling exit()) frees all memory associated with it.
However, it would be a memory leak if you used this piece of code as a subroutine and called something like return 1; in place of exit().
Technically, yes it is a memory leak. But any memory allocated by a process is automatically freed when that process terminates, so in this example the calls to free (and fclose) are not really required.
In a more complex program, this would probably be a real problem. The missing free would create a memory leak and the missing fclose would cause a resource leak.
The operating system cleans up any unfreed memory by a process when that process closes. At least, modern operating systems do.
If the program were not exiting at the point where result != lSize, that is, if it continued along some other path of execution, then yes: it is a guaranteed memory leak.
There are two possible paths.
(1) result != lSize - in this case, exit(3) is called. This kills the process and the operating system will clean up the memory.
(2) result == lSize - in this case, the buffer is explicitly freed, but return is called right afterwards, so the free is mostly just good style, because this also kills the process and the operating system will, again, clean up the memory.
So in this simple case, there is no memory leak. But it is good practice to make sure you free any memory you've allocated in any application you write. Getting into this habit will prevent many headaches for you in the future.
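To make the subroutine case concrete, here is a hedged sketch (read_whole_file is a hypothetical name, not from the example) that routes every failure through one cleanup path, so neither the buffer nor the FILE leaks:
#include <stdio.h>
#include <stdlib.h>

char *read_whole_file(const char *name, long *out_size)
{
    char *buffer = NULL;
    long lSize;
    FILE *pFile = fopen(name, "rb");
    if (pFile == NULL)
        return NULL;

    if (fseek(pFile, 0, SEEK_END) != 0)
        goto fail;
    lSize = ftell(pFile);
    if (lSize < 0)
        goto fail;
    rewind(pFile);

    buffer = malloc((size_t)lSize);
    if (buffer == NULL)
        goto fail;

    if (fread(buffer, 1, (size_t)lSize, pFile) != (size_t)lSize)
        goto fail;

    fclose(pFile);
    *out_size = lSize;
    return buffer;

fail:
    free(buffer);   /* free(NULL) is a no-op, so this is always safe */
    fclose(pFile);
    return NULL;
}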
As to the possible memory leak, others have already answered that question. A while ago, I posted a variation of the given code which should handle all possible error conditions correctly:
fsize()
fget_contents()
