I'm writing a buffer into a binary file. The code is as follows:
FILE *outwav = fopen(outwav_path, "wb");
if(!outwav)
{
fprintf(stderr, "Can't open file %s for writing.\n", outwav_path);
exit(1);
}
[...]
//Create sample buffer
short *samples = malloc((loopcount*(blockamount-looppos)+looppos) << 5);
if(!samples)
{
fprintf(stderr, "Error : Can't allocate memory.\n");
exit(1);
}
[...]
fwrite(samples, 2, 16*k, outwav); //write samplebuffer to file
fflush(outwav);
fclose(outwav);
free(samples);
The last free() call causes me random segfaults.
After several headaches I thought it was probably because the fwrite call would execute only after a delay and then read freed memory. So I added the fflush call, yet the problem STILL occurs.
The only way to get rid of it is to not free the memory and let the OS do it for me. This is supposed to be bad practice though, so I'd rather ask whether there is a better solution.
Before anyone asks, yes I check that the file is opened correctly, and yes I test that the memory is allocated properly, and no, I don't touch the returned pointers in any way.
Once fwrite returns you are free to do whatever you want with the buffer. You can remove the fflush call.
It sounds like a buffer overflow error in a totally unrelated part of the program is writing over the book-keeping information that free needs to do its work. Run your program under a tool like valgrind to find out if this is the problem and to find the part of the program that has a buffer overflow.
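For illustration, here is a minimal sketch of that failure mode (my own example, not the asker's code): the out-of-bounds write appears to succeed, and the damage only surfaces later, inside free().
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(16);   /* the allocator reserves 16 bytes plus its own book-keeping */
    if (buf == NULL)
        return 1;
    memset(buf, 'A', 32);     /* overflow: 16 bytes past the end clobber the heap metadata */
    free(buf);                /* undefined behaviour; often crashes here, sometimes much later */
    return 0;
}
Valgrind flags the memset as an invalid write, which is far easier to act on than the eventual crash in free().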
I have a piece of C code where I try to write a buffer into an opened output file. I am getting a segmentation fault when I try to run the code.
if (fwrite(header, record_size, 1, uOutfile) != 1)
{
return 0;
}
The header is properly populated and I am able to print out its contents. The size of the buffer header is definitely greater than record_size. Is there anything else worth checking? Any other reason why fwrite could cause a segfault? Running it under gdb gave the following output:
0x00007ffff6b7d66d in _IO_fwrite (buf=0x726d60, size=16, count=1, fp=0x738820) at iofwrite.c:43
43 iofwrite.c: No such file or directory.
in iofwrite.c
It seems to suggest that the output file has not been created. However, an ls -l on my directory shows the output file with a size of 0 bytes.
I would greatly appreciate if someone could throw some light on the problem.
EDIT: Code that opens the file:
outfd = open(out, O_RDWR|O_CREAT|O_TRUNC|O_LARGEFILE, 0664);
if (outfd == -1) {
dagutil_panic("Could not open %s for writing.\n", out);
}
uOutfile = fdopen(outfd, "w");
I don't think there's enough here to know for sure what your problems are, but here are some thoughts:
Show us the code involving your FILE * (uOutFile) and your buffer (header); we can then see if you're borking memory somewhere in between.
Run your code through valgrind: You're getting a segfault, so it could probably catch what you're doing wrong.
In gdb, examine the contents of both header and uOutFile (not just the pointer, but the pointed-to memory). (You'll have to use some smarts to figure out if uOutFile looks right, but you should be able to determine, one way or the other, whether header is correct.)
To add to this: my general debug strategy when I get segfaults is:
gdb's backtrace. Tells me where the segfault happened. Usually, this is enough to uncover the dumb thing I did.
Look at the pointers in the vicinity of the crash. Is the pointer correct, and is the pointed-to data correct? (Especially if you see something strange like 0xdeadbeef.)
Valgrind Valgrind Valgrind
(2 & 3 are in no particular order.)
fclose() is causing a segfault. I have:
char buffer[L_tmpnam];
char *pipeName = tmpnam(buffer);
FILE *pipeFD = fopen(pipeName, "w"); // open for writing
...
...
...
fclose(pipeFD);
I don't do any file-related stuff in the ... yet, so that doesn't affect it. However, my MAIN process communicates with another process through shared memory where pipeName is stored; the other process fopens this pipe for reading to communicate with MAIN.
Any ideas why this is causing a segfault?
Thanks,
Hristo
Pass pipeFD to fclose. fclose closes the file by its FILE* handle, not by the filename char*. In C (unlike C++) the compiler will typically let an incompatible pointer conversion (here char* to FILE*) through with only a warning, so that's where the bug comes from.
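(The code as quoted above already passes pipeFD, so presumably the original version passed pipeName, which is what this answer and the one below are pointing at.) A minimal illustration of why such a call compiles in C but crashes at run time:
#include <stdio.h>

int main(void)
{
    char pipeName[] = "/tmp/somepipe";   /* illustrative name, not from the question */
    FILE *pipeFD = fopen(pipeName, "w");
    if (pipeFD == NULL)
        return 1;
    /* fclose(pipeName);  <- compiles in C with only an "incompatible pointer type"
                             warning, then treats the string bytes as a FILE
                             structure: undefined behaviour, usually a crash */
    fclose(pipeFD);        /* correct: close by the FILE* handle */
    return 0;
}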
Check that pipeFD is non-NULL before calling fclose.
Edit: You confirmed that the error was due to fopen failing, you need to check the error like so:
pipeFD = fopen(pipeName, "w");
if (pipeFD == NULL)
{
perror ("The following error occurred");
}
else
{
fclose (pipeFD);
}
Example output:
The following error occurred: No such file or directory
A crash in fclose implies the FILE * passed to it has been corrupted somehow. This can happen if the pointer itself is corrupted (check in your debugger to make sure it has the same value at the fclose as was returned by the fopen), or if the FILE data structure gets corrupted by some random pointer write or buffer overflow somewhere.
You could try using valgrind or some other memory corruption checker to see if it can tell you anything. Or use a data breakpoint in your debugger on the address of the pipeFD variable. Using a data breakpoint on the FILE itself is tricky, as it spans multiple words and is modified by normal file I/O operations.
You should close pipeFD instead of pipeName.
I'm taking a networking class at school and am using C/GDB for the first time. Our assignment is to make a webserver that communicates with a client browser. I am well underway and can open files and send them to the client. Everything goes great until I open a very large file, and then I segfault. I'm not a pro at C/GDB, so I'm sorry if that is causing me to ask silly questions and keeps me from seeing the solution myself, but when I looked at the dumped core I see my segfault comes from here:
if (-1 == (openfd = open(path, O_RDONLY)))
Specifically we are tasked with opening the file and the sending it to the client browser. My Algorithm goes:
Open/Error catch
Read the file into a buffer/Error catch
Send the file
We were also tasked with making sure that the server doesn't crash when SENDING very large files. But my problem seems to be with opening them. I can send all my smaller files just fine. The file in question is 29.5MB.
The whole algorithm is:
ssize_t send_file(int conn, char *path, int len, int blksize, char *mime) {
int openfd; // File descriptor for file we open at path
int temp; // Counter for the size of the file that we send
char buffer[len]; // Buffer to read the file we are opening that is len big
// Open the file
if (-1 == (openfd = open(path, O_RDONLY))) {
send_head(conn, "", 400, strlen(ERROR_400));
(void) send(conn, ERROR_400, strlen(ERROR_400), 0);
logwrite(stdout, CANT_OPEN);
return -1;
}
// Read from file
if (-1 == read(openfd, buffer, len)) {
send_head(conn, "", 400, strlen(ERROR_400));
(void) send(conn, ERROR_400, strlen(ERROR_400), 0);
logwrite(stdout, CANT_OPEN);
return -1;
}
(void) close(openfd);
// Send the buffer now
logwrite(stdout, SUC_REQ);
send_head(conn, mime, 200, len);
send(conn, &buffer[0], len, 0);
return len;
}
I don't know if it's just that I am a Unix/C novice. Sorry if it is. =( But your help is much appreciated.
It's possible I'm just misunderstanding what you meant in your question, but I feel I should point out that in general, it's a bad idea to try to read the entire file at once, in case you deal with something that's just too big for your memory to handle.
It's smarter to allocate a buffer of a specific size, say 8192 bytes (well, that's what I tend to do a lot, anyway), and just always read and send that much, as much as necessary, until your read() operation returns 0 (and no errno set) for end of stream.
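A rough sketch of that loop (my own sketch; conn and openfd are placeholder descriptors, and retry/EINTR handling is omitted):
#include <sys/socket.h>
#include <unistd.h>

/* Read the already-open file descriptor openfd in fixed-size chunks and send
   each chunk on the socket conn.  Returns 0 on success, -1 on error. */
static int send_file_chunked(int conn, int openfd)
{
    char chunk[8192];
    ssize_t nread;

    while ((nread = read(openfd, chunk, sizeof(chunk))) > 0) {
        ssize_t sent = 0;
        while (sent < nread) {                      /* send() may accept fewer bytes than asked */
            ssize_t n = send(conn, chunk + sent, nread - sent, 0);
            if (n == -1)
                return -1;
            sent += n;
        }
    }
    return (nread == 0) ? 0 : -1;                   /* 0 means end of file, -1 means read error */
}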
I suspect you have a stackoverflow (I should get bonus points for using that term on this site).
The problem is you are allocating the buffer for the entire file on the stack all at once. For larger files, this buffer is larger than the stack, and the next time you try to call a function (and thus put some parameters for it on the stack) the program crashes.
The crash appears at the open line because allocating the buffer on the stack doesn't actually write any memory; it just changes the stack pointer. When your call to open tries to write its parameters to the stack, the top of the stack has already overflowed, and this causes the crash.
The solution is, as Platinum Azure or dreamlax suggest, to read in the file a little bit at a time, or to allocate your buffer on the heap with malloc or new.
Rather than using a variable-length array, perhaps try allocating the memory using malloc.
char *buffer = malloc (len);
...
free (buffer);
I just did some simple tests on my system, and when I use variable length arrays of a big size (like the size you're having trouble with), I also get a SEGFAULT.
You're allocating the buffer on the stack, and it's way too big.
When you allocate storage on the stack, all the compiler does is decrease the stack pointer enough to make that much room (this keeps stack variable allocation to constant time). It does not try to touch any of this stacked memory. Then, when you call open(), it tries to put the parameters on the stack and discovers it has overflowed the stack and dies.
You need to either operate on the file in chunks, memory-map it (mmap()), or malloc() storage.
Also, path should be declared const char*.
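If you do want to hand the whole file to the socket in one go, a rough sketch of the memory-mapping option (my own sketch; a real implementation should loop on send()'s return value, as with read()):
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/stat.h>

/* Map the already-open file descriptor openfd into memory and send it on the
   socket conn.  Returns 0 on success, -1 on error. */
static int send_file_mmap(int conn, int openfd)
{
    struct stat st;
    if (fstat(openfd, &st) == -1 || st.st_size == 0)   /* mmap rejects zero-length maps */
        return -1;

    void *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, openfd, 0);
    if (data == MAP_FAILED)
        return -1;

    /* For brevity this assumes send() accepts everything in one call. */
    ssize_t sent = send(conn, data, st.st_size, 0);

    munmap(data, st.st_size);
    return (sent == st.st_size) ? 0 : -1;
}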
I'm looking at some legacy Linux code which uses pthreads.
In one thread a file is read via fgets(). The FILE variable is a global variable shared across all threads. (Hey, I didn't write this...)
In another thread every now and again the FILE is closed and reopened with another filename.
For several seconds after this has happened, the thread calling fgets() acts as if it is continuing to read the last record it read from the previous file: almost as if there was an error but fgets() was not returning NULL. Then it sorts itself out and starts reading from the new file.
The code looks a bit like this (snipped for brevity so I hope it's still intelligible):
In one thread:
while(gRunState != S_EXIT){
nanosleep(&timer_delay,0);
flag = fgets(buff, sizeof(buff), gFile);
if (flag != NULL){
// do something with buff...
}
}
In the other thread:
fclose(gFile);
gFile = fopen(newFileName,"r");
There's no lock to make sure that the fgets() is not called at the same time as the fclose()/fopen().
Any thoughts as to failure modes which might cause fgets() to fail but not return NULL?
How the described code goes wrong
The stdio library buffers data, allocating memory to store the buffered data. The GNU C library dynamically allocates file structures (some libraries, notably on Solaris, use pointers to statically allocated file structures, but the buffer is still dynamically allocated unless you set the buffering otherwise).
If your thread works with a copy of the global file pointer (because you passed the file pointer to the function as an argument), then it is conceivable that the code would continue to access the data structure that was originally allocated (even though it was freed by the close), and would read data from the buffer that was already present. Things would only start going wrong when you exit the function, or read beyond the contents of the buffer, or when the space that was previously allocated to the file structure is reallocated for a new use.
FILE *global_fp;
void somefunc(FILE *fp, ...)
{
...
while (fgets(buffer, sizeof(buffer), fp) != 0)
...
}
void another_function(...)
{
...
/* Pass global file pointer by value */
somefunc(global_fp, ...);
...
}
Proof of Concept Code
Tested on MacOS X 10.5.8 (Leopard) with GCC 4.0.1:
#include <stdio.h>
#include <stdlib.h>
FILE *global_fp;
const char etc_passwd[] = "/etc/passwd";
static void error(const char *fmt, const char *str)
{
fprintf(stderr, fmt, str);
exit(1);
}
static void abuse(FILE *fp, const char *filename)
{
char buffer1[1024];
char buffer2[1024];
if (fgets(buffer1, sizeof(buffer1), fp) == 0)
error("Failed to read buffer1 from %s\n", filename);
printf("buffer1: %s", buffer1);
/* Dangerous!!! */
fclose(global_fp);
if ((global_fp = fopen(etc_passwd, "r")) == 0)
error("Failed to open file %s\n", etc_passwd);
if (fgets(buffer2, sizeof(buffer2), fp) == 0)
error("Failed to read buffer2 from %s\n", filename);
printf("buffer2: %s", buffer2);
}
int main(int argc, char **argv)
{
if (argc != 2)
error("Usage: %s file\n", argv[0]);
if ((global_fp = fopen(argv[1], "r")) == 0)
error("Failed to open file %s\n", argv[1]);
abuse(global_fp, argv[1]);
return(0);
}
When run on its own source code, the output was:
Osiris JL: ./xx xx.c
buffer1: #include <stdio.h>
buffer2: ##
Osiris JL:
So, empirical proof that on some systems, the scenario I outlined can occur.
How to fix the code
The fix to the code is discussed well in other answers. If you avoid the problem I illustrated (for example, by avoiding global file pointers), that is simplest. Assuming that is not possible, it may be sufficient to compile with the appropriate flags (on many Unix-like systems, the compiler flag '-D_REENTRANT' does the job), and you will end up using thread-safe versions of the basic standard I/O functions. Failing that, you may need to put explicit thread-safe management policies around the access to the file pointers; a mutex or something similar (and modify the code to ensure that the threads use the mutex before using the corresponding file pointer).
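For the last option, a minimal sketch of what such a mutex policy could look like (gFileLock and the two helper functions are names introduced here for illustration, not part of the original code):
#include <pthread.h>
#include <stdio.h>

FILE *gFile;
pthread_mutex_t gFileLock = PTHREAD_MUTEX_INITIALIZER;

/* Reader thread: hold the lock around each fgets() so the stream cannot be
   closed and reopened underneath it. */
int read_line(char *buff, int size)
{
    int ok;
    pthread_mutex_lock(&gFileLock);
    ok = (gFile != NULL && fgets(buff, size, gFile) != NULL);
    pthread_mutex_unlock(&gFileLock);
    return ok;
}

/* Reopening thread: hold the same lock across the fclose()/fopen() pair. */
int switch_file(const char *newFileName)
{
    pthread_mutex_lock(&gFileLock);
    if (gFile != NULL)
        fclose(gFile);
    gFile = fopen(newFileName, "r");
    pthread_mutex_unlock(&gFileLock);
    return gFile != NULL;
}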
A FILE * is just a pointer to various resources. If fclose does not zero out those resources, it's possible that the values still make enough sense that fgets does not immediately notice.
That said, until you add some locking, I would consider this code completely broken.
Umm, you really need to control access to the FILE stream with a mutex, at the minimum. You aren't looking at some clever implementation of lock-free methods; you are looking at really bad (and dusty) code.
Using thread-local FILE streams is the obvious and most elegant fix; just use locks appropriately to ensure that no two threads operate on the same offset of the same file at once. Or, more simply, ensure that threads block (or do other work) while waiting for the file lock to clear. POSIX advisory locks would be best for this, or you're dealing with dynamically growing a tree of mutexes... or initializing a file-lock mutex per thread and making each thread check the other's lock (yuck!) (since files can be renamed).
I think you are staring down the barrel of some major fixes... unfortunately (from what you have indicated) there is no choice but to make them. In this case, it's actually easier to debug a threaded program written in this manner than it would be to debug something using forks, so consider yourself lucky :)
You can also use a condition wait (pthread_cond_wait) instead of just a nanosleep; it will get signaled when intended, e.g. when a new file gets fopened.
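A minimal sketch of that idea, assuming a shared flag protected by a mutex (all names here are illustrative, not from the original code):
#include <pthread.h>
#include <stdio.h>

FILE *gFile;
int gFileChanged;
pthread_mutex_t gLock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t gNewFile = PTHREAD_COND_INITIALIZER;

/* Reader thread: sleep until the other thread announces a new file. */
void wait_for_new_file(void)
{
    pthread_mutex_lock(&gLock);
    while (!gFileChanged)                  /* loop to cope with spurious wakeups */
        pthread_cond_wait(&gNewFile, &gLock);
    gFileChanged = 0;
    pthread_mutex_unlock(&gLock);
}

/* Reopening thread: swap the file under the lock, then signal the reader. */
void announce_new_file(const char *newFileName)
{
    pthread_mutex_lock(&gLock);
    if (gFile != NULL)
        fclose(gFile);
    gFile = fopen(newFileName, "r");
    gFileChanged = 1;
    pthread_cond_signal(&gNewFile);
    pthread_mutex_unlock(&gLock);
}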
I often use the website www.cplusplus.com as a reference when writing C code.
I was reading the example cited on the page for fread and had a question.
As an example they post:
/* fread example: read a complete file */
#include <stdio.h>
#include <stdlib.h>
int main () {
FILE * pFile;
long lSize;
char * buffer;
size_t result;
pFile = fopen ( "myfile.bin" , "rb" );
if (pFile==NULL) {fputs ("File error",stderr); exit (1);}
// obtain file size:
fseek (pFile , 0 , SEEK_END);
lSize = ftell (pFile);
rewind (pFile);
// allocate memory to contain the whole file:
buffer = (char*) malloc (sizeof(char)*lSize);
if (buffer == NULL) {fputs ("Memory error",stderr); exit (2);}
// copy the file into the buffer:
result = fread (buffer,1,lSize,pFile);
if (result != lSize) {fputs ("Reading error",stderr); exit (3);}
/* the whole file is now loaded in the memory buffer. */
// terminate
fclose (pFile);
free (buffer);
return 0;
}
It seems to me that if result != lSize, then free(buffer) will never get called. Would this be a memory leak in this example?
I have always thought the examples on their site are of a very high quality. Perhaps I am not understanding correctly?
It wouldn't be a memory leak in this example, because terminating the program (by calling exit()) frees all memory associated with it.
However, it would be a memory leak if you used this piece of code as a subroutine and called something like return 1; in place of exit().
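For example, a subroutine version (my own sketch, not the site's example; fseek/ftell error handling is abbreviated) has to free the buffer on the error path as well:
#include <stdio.h>
#include <stdlib.h>

/* Read the whole of the already-opened stream pFile into a freshly allocated
   buffer.  Returns the buffer (caller frees it) or NULL on error. */
char *read_whole_file(FILE *pFile, long *out_size)
{
    fseek(pFile, 0, SEEK_END);
    long lSize = ftell(pFile);
    rewind(pFile);

    char *buffer = malloc(lSize);
    if (buffer == NULL)
        return NULL;

    if (fread(buffer, 1, lSize, pFile) != (size_t)lSize) {
        free(buffer);          /* without this, returning here would leak the buffer */
        return NULL;
    }

    *out_size = lSize;
    return buffer;
}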
Technically, yes it is a memory leak. But any memory allocated by a process is automatically freed when that process terminates, so in this example the calls to free (and fclose) are not really required.
In a more complex program, this would probably be a real problem. The missing free would create a memory leak and the missing fclose would cause a resource leak.
The operating system cleans up any memory left unfreed by a process when that process exits. At least, modern operating systems do.
If the program were not exiting at the point where result != lSize, that is, if it continued on some other path of execution, then yes, it would be a guaranteed memory leak.
There are two possible paths.
(1) result != lSize - in this case, exit(3) is called. This kills the process and the operating system will clean up the memory.
(2) result == lSize - in this case, the buffer is explicitly freed, but return is called right afterwards, so the free is mostly just good style, because this also kills the process and the operating system will, again, clean up the memory.
So in this simple case, there is no memory leak. But it is probably a good practice to just make sure you're freeing any memory you've allocated in any application you write. Getting into this habit will prevent many headaches for you in the future.
As to possible memory leakage, others have already answered that question. A while ago, I posted a variation of the given code which should handle all possible error conditions correctly:
fsize()
fget_contents()