I use cJSON in my program to convert my values to JSON and write them to a file. Here is an example of my code:
void writeStructToFile(IOPipe this, struct structtype somevalues) {
    cJSON *jout = cJSON_CreateObject();
    cJSON_AddItemToObject(jout, "V1", cJSON_CreateNumber(somevalues.v1));
    cJSON_AddItemToObject(jout, "V2", cJSON_CreateNumber(somevalues.v2));
    fprintf(this->outstream, "%s", cJSON_Print(jout));
    cJSON_Delete(jout);
}
It works great, but after some time I found that Linux (embedded) kills my program because of abnormal memory use, or the device (a Cortex-A8) simply hangs. After debugging I found that the leak appears exactly in this function, even though I delete the pointer at the end. Can anyone see the leak?
Initially I thought it might be the FILE I/O's internal buffers, but those are flushed automatically when they grow too large.
The real leak is that cJSON_Print allocates memory: a char array that you must free yourself once you're done with it:
char* text = cJSON_Print(jout);
fprintf(this->outstream, "%s", text);
free(text); // As suggested by PaulPonomarev.
cJSON_Delete(jout);
For a char* allocated by cJSON_Print, it is said that cJSON_FreePrintBuffer should be used.
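Putting it together, a minimal corrected sketch of the function from the question (it keeps the original IOPipe and struct structtype types and, like the snippet above, releases the printed string with plain free):

void writeStructToFile(IOPipe this, struct structtype somevalues) {
    cJSON *jout = cJSON_CreateObject();
    cJSON_AddItemToObject(jout, "V1", cJSON_CreateNumber(somevalues.v1));
    cJSON_AddItemToObject(jout, "V2", cJSON_CreateNumber(somevalues.v2));

    char *text = cJSON_Print(jout);   /* heap-allocated string */
    if (text != NULL) {
        fprintf(this->outstream, "%s", text);
        free(text);                   /* release the printed string */
    }
    cJSON_Delete(jout);               /* release the whole object tree */
}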
I am trying to make a function in C to erase all the contents of a temp folder and to erase the folder.
Whilst I have already successfully created the code to cycle through the files and to erase the folder (it is pretty much straightforward), I am having trouble erasing the files using unlink.
Here is the code that I am using:
int delete_folder(char *foldername) {
    DIR *dp;
    struct dirent *ep;

    dp = opendir(foldername);
    if (dp != NULL) {
        readdir(dp); readdir(dp);
        while (ep = readdir(dp)) {
            char* cell = concatenate(concatenate(foldername, "\\"), "Bayesian Estimation.xlsx"); //ep->d_name);
            printf("%s\n", cell);
            remove(cell);
            printf("%s\n", strerror(errno));
        }
        closedir(dp);
    }
    if (!rmdir(foldername)) {return(0);} else {return(-1);}
}
The code that I wrote is fully functional for all files except those which include spaces in the filename. After some testing, I can guarantee that the unlink function eliminates all files in the folder (even those with special characters in the filename) but fails if the filename includes a space (however, for this same file, if I remove the space(s), the function works again).
Has anyone else encountered this problem? And, more importantly, can it be solved/circumvented?
(The problem remains even if I introduce the space escape sequences directly)
The error presented by unlink is "No such file or directory" (ENOENT). Mind you that the file is indeed at the referred location (as can be verified by the code outputting the correct filename in the variable cell), and this error also occurs if I use the function remove instead of unlink.
PS: The function concatenate is a function of my own making which outputs the concatenation of the two input strings.
Edit:
The code was written in Codeblocks, in Windows.
Here's the code for the concatenate function:
char* concatenate(char *str1, char *str2) {
    int a1 = strlen(str1), a2 = strlen(str2);
    char* str3[a1+a2+1];
    snprintf(str3, a1+a2+2, "%s%s", str1, str2);
    return(str3);
}
Whilst you are right in saying that it is a possible (and easy) memory leak, the function's inputs and outputs are code-generated and only for personal use, and therefore there is no great reason to worry about it (no real need to foolproof the code).
You say "using unlink()" but the code is using remove(). Which platform are you on? Is there any danger that your platform implements remove() by running an external command which doesn't handle spaces in file names properly? On most systems, that won't be a problem.
What is a problem is that you don't check the return value from remove() before printing the error. You should only print the error if the function indicates that it generated an error. No function in the Standard C (or POSIX) library sets errno to zero. Also, errors should be reported on standard error; that's what the standard error stream is for.
if (remove(cell) != 0)
    fprintf(stderr, "Failed to remove %s (%d: %s)\n", cell, errno, strerror(errno));
else
    printf("%s removed OK\n", cell);
I regard the else clause as a temporary measure while you're getting the code working.
It also looks like you're leaking memory like the proverbial sieve. You capture the result of a double concatenate operation in cell, but you never free it. Indeed, if the nested calls both allocate memory, then you've got a leak even if you add free(cell); at the end of the loop (inside the loop, after the second printf(), the one I reworked above). If concatenate() doesn't allocate new memory each time (it returns a pointer to statically allocated memory), then I think concatenating a string with the output of concatenate() is also dangerous, probably invoking undefined behaviour as you copy a string over itself. You need to look hard at the code for concatenate(), and/or present it for analysis.
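For reference, a concatenate() that allocates fresh memory on each call might look like this sketch (the caller then owns, and must free(), every returned pointer):

#include <stdlib.h>
#include <string.h>

char *concatenate(const char *str1, const char *str2) {
    size_t len1 = strlen(str1), len2 = strlen(str2);
    char *result = malloc(len1 + len2 + 1);     /* +1 for the terminating null */
    if (result != NULL) {
        memcpy(result, str1, len1);
        memcpy(result + len1, str2, len2 + 1);  /* copies str2's null too */
    }
    return result;                              /* caller must free() this */
}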
Thank you very much for all your input. After reviewing your comments and making a few experiments myself, I figured out that remove/unlink was not working because the filename was only temporarily saved in the variable cell (it was there long enough to be printed correctly to the console, hence my confusion). After appropriately storing my filename before using it, my problem has been completely solved.
Here's the code (I have already checked it with filenames as complex as I could make them):
int delete_folder(char* foldername) {
    DIR *dp;
    struct dirent *ep;

    dp = opendir(foldername);
    if (dp != NULL) {
        readdir(dp); readdir(dp);
        while (ep = readdir(dp)) {
            char cell[strlen(foldername)+1+strlen(ep->d_name)+1];
            strcpy(cell, concatenate(concatenate(foldername, "\\"), ep->d_name));
            unlink(cell);
            printf("File \"%s\": %s\n", ep->d_name, strerror(errno));
        }
        closedir(dp);
    }
    if (!rmdir(foldername)) {return(0);} else {return(-1);}
}
I realize it was kind of a noob mistake, resulting from my being a bit out of practice at programming in C for a while, so... Thank you very much for all your help!
LPTSTR name = NULL;
DWORD nameLength = 0;
namelength = host->nameLength; // returns 10
name = (LPTSTR) malloc( sizeof(nameLength * sizeof(TCHAR))); //crashes here
I don't understand the reason for its crashing at this point. Could somebody explain why?
Update: (deleted the next line after the crashing line; I had copied it by mistake. It was just a commented-out line in the code.)
UPDATE:
Sorry guys, I had tried all the ways you have described before asking the question. Doesn't work.
I think it's some other issue. There's a Windows service calling the function above (from a DLL) when the computer starts, so I was doing remote debugging of the DLL using WinDbg (I break in using a hard-coded DebugBreak just before the function gets called).
When I am on the malloc step and give a "step over" instruction (F10), it doesn't go to the next step; instead it says the client is running, but then suddenly breaks in at nt!DbgLoadImageSymbols on a "leave" instruction. Giving a go (F5) after this leaves the machine in a hung state.
If you crash inside of malloc, then it means that you have previously corrupted the heap (or more accurately, the doubly linked lists that organize the heap).
Considering that you have a glaring bug here:
name = (LPTSTR) malloc( sizeof(nameLength * sizeof(TCHAR)));
You should review all of your malloc calls and ensure that you are allocating enough memory. What you have likely done is allocate too small a buffer, written too much data through the returned pointer (corrupting the heap), and then crashed in a later call to malloc.
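Presumably the intent was something like this sketch (whether you also need room for a terminating null depends on what host->nameLength counts):

name = (LPTSTR) malloc((nameLength + 1) * sizeof(TCHAR));  /* nameLength characters plus a terminating null */
if (name == NULL) {
    /* handle allocation failure */
}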
Since you are on Windows, you can also utilize page-heap verification (via the gflags tool). This will help you catch buffer overwrites when they happen.
Not enough info for an answer, but too much for a comment, sorry. I made a simple main() based as closely on your clues as I could, with the previously commented-on errors left uncorrected, but with extra lines showing how much memory is allocated. The program compiles and runs without complaint, so your problem has not been properly expressed.
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

int main(){
    LPTSTR name = NULL;
    DWORD nameLength = 10;

    name = (LPTSTR) malloc( sizeof(nameLength * sizeof(TCHAR)));
    if (name) {
        printf ("Memory for %d bytes allocated\n", sizeof(nameLength * sizeof(TCHAR)));
        free (name);
    } else {
        printf ("No memory for %d bytes\n", sizeof(nameLength * sizeof(TCHAR)));
    }
    return 0;
}
Program output:
Memory for 4 bytes allocated
What is clear is that 4 bytes is unlikely to be enough memory for whatever it is that you said has length 10.
I'm writing a buffer into a binary file. The code is as follows:
FILE *outwav = fopen(outwav_path, "wb");
if(!outwav)
{
    fprintf(stderr, "Can't open file %s for writing.\n", outwav_path);
    exit(1);
}

[...]

//Create sample buffer
short *samples = malloc((loopcount*(blockamount-looppos)+looppos) << 5);
if(!samples)
{
    fprintf(stderr, "Error : Can't allocate memory.\n");
    exit(1);
}

[...]

fwrite(samples, 2, 16*k, outwav); //write samplebuffer to file
fflush(outwav);
fclose(outwav);
free(samples);
The last free() call causes me random segfaults.
After several headaches I thought it was probably because the fwrite call would execute only after a delay, and then it would read freed memory. So I added the fflush call, yet the problem STILL occurs.
The only way to get rid of it is to not free the memory and let the OS do it for me. This is supposed to be bad practice, though, so I'd rather ask whether there is a better solution.
Before anyone asks, yes I check that the file is opened correctly, and yes I test that the memory is allocated properly, and no, I don't touch the returned pointers in any way.
Once fwrite returns you are free to do whatever you want with the buffer. You can remove the fflush call.
It sounds like a buffer overflow error in a totally unrelated part of the program is writing over the book-keeping information that free needs to do its work. Run your program under a tool like valgrind to find out if this is the problem and to find the part of the program that has a buffer overflow.
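As a contrived illustration (not your actual code) of how an overflow elsewhere can make a later, perfectly valid free() crash:

#include <stdlib.h>
#include <string.h>

int main(void) {
    char *a = malloc(16);
    char *b = malloc(16);

    memset(a, 'x', 32);   /* overflow: writes 16 bytes past the end of a, trampling
                             the allocator's book-keeping that sits next to it     */

    free(b);              /* this free (or a later malloc) may now crash           */
    free(a);
    return 0;
}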
I am running a debug-version of my C binary within valgrind, which returns numerous errors of the sort Conditional jump or move depends on uninitialised value(s).
Using the symbol table, valgrind tells me where to look in my program for this issue:
==23899== 11 errors in context 72 of 72:
==23899== Conditional jump or move depends on uninitialised value(s)
==23899== at 0x438BB0: _int_free (in /foo/bar/baz)
==23899== by 0x43CF75: free (in /foo/bar/baz)
==23899== by 0x4179E1: json_tokener_parse_ex (json_tokener.c:593)
==23899== by 0x418DC8: json_tokener_parse (json_tokener.c:108)
==23899== by 0x40122D: readJSONMetadataHeader (metadataHelpers.h:345)
==23899== by 0x4019CB: main (baz.c:90)
I have the following function readJSONMetadataHeader(...) that calls json_tokener_parse():
int readJSONMetadataHeader(...) {
    char buffer[METADATA_MAX_SIZE];
    json_object *metadataJSON;
    int charCnt = 0;

    ...
    /* fill up the `buffer` variable here; basically a */
    /* stream of characters representing JSON data...  */
    ...

    /* terminate `buffer` */
    buffer[charCnt - 1] = '\0';

    ...

    metadataJSON = json_tokener_parse(buffer);

    ...
}
The function json_tokener_parse() in turn is as follows:
struct json_object* json_tokener_parse(const char *str)
{
    struct json_tokener* tok;
    struct json_object* obj;

    tok = json_tokener_new();
    obj = json_tokener_parse_ex(tok, str, -1);
    if(tok->err != json_tokener_success)
        obj = (struct json_object*)error_ptr(-tok->err);
    json_tokener_free(tok);
    return obj;
}
Following the trace back to readJSONMetadataHeader(), it seems like the uninitialized value is the char [] (or const char *) variable buffer that is fed to json_tokener_parse(), which in turn is fed to json_tokener_parse_ex().
But the buffer variable gets filled with data and then terminated before the json_tokener_parse() function is called.
So why is valgrind saying this value is uninitialized? What am I missing?
It looks from the valgrind error report as if your application is statically linked (in particular, free appears to be in the main executable, and not libc.so.6).
Valgrind will report bogus errors for statically linked applications.
More precisely, there are intentional "don't care" errors inside libc. When you link the application dynamically, such errors are suppressed by default (via suppressions file that ships with Valgrind).
But when you link your application statically, Valgrind has no clue that the faulty code comes from libc.a, and so it reports the errors.
Generally, statically linking applications on Linux is a bad idea (TM).
Running such an application under Valgrind is doubly so: Valgrind will not be able to intercept malloc/free calls, and will effectively catch only uninitialized memory reads, not the heap buffer overflows (or other heap corruption bugs) that it is usually good at finding.
I don't see charCnt initialized.
To see if it comes from buffer, simply initialize it with = {0}; this would also make your null termination of the buffer unnecessary.
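Using the declaration from the question, that would be (a one-line sketch):

char buffer[METADATA_MAX_SIZE] = {0};   /* every byte starts out zeroed */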
Have a look in json_tokener_parse_ex(), which you don't show; it's likely trying to free something that's not initialized.
buffer[charCnt - 1] = '\0';
This will at least fail if charCnt happens to be zero.
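A defensive version of that line might look like this sketch:

if (charCnt > 0)
    buffer[charCnt - 1] = '\0';   /* only index backwards when something was read  */
else
    buffer[0] = '\0';             /* otherwise leave buffer as an empty string     */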
I'm taking a networking class at school and am using C/GDB for the first time. Our assignment is to make a webserver that communicates with a client browser. I am well underway and can open files and send them to the client. Everything goes great until I open a very large file, and then I seg fault. I'm not a pro at C/GDB, so I'm sorry if that is causing me to ask silly questions and keeps me from seeing the solution myself, but when I looked at the dumped core I see my seg fault comes here:
if (-1 == (openfd = open(path, O_RDONLY)))
Specifically, we are tasked with opening the file and then sending it to the client browser. My algorithm goes:
Open/Error catch
Read the file into a buffer/Error catch
Send the file
We were also tasked with making sure that the server doesn't crash when SENDING very large files. But my problem seems to be with opening them. I can send all my smaller files just fine. The file in question is 29.5MB.
The whole algorithm is:
ssize_t send_file(int conn, char *path, int len, int blksize, char *mime) {
    int openfd;        // File descriptor for file we open at path
    int temp;          // Counter for the size of the file that we send
    char buffer[len];  // Buffer to read the file we are opening that is len big

    // Open the file
    if (-1 == (openfd = open(path, O_RDONLY))) {
        send_head(conn, "", 400, strlen(ERROR_400));
        (void) send(conn, ERROR_400, strlen(ERROR_400), 0);
        logwrite(stdout, CANT_OPEN);
        return -1;
    }

    // Read from file
    if (-1 == read(openfd, buffer, len)) {
        send_head(conn, "", 400, strlen(ERROR_400));
        (void) send(conn, ERROR_400, strlen(ERROR_400), 0);
        logwrite(stdout, CANT_OPEN);
        return -1;
    }

    (void) close(openfd);

    // Send the buffer now
    logwrite(stdout, SUC_REQ);
    send_head(conn, mime, 200, len);
    send(conn, &buffer[0], len, 0);

    return len;
}
I dunno if it is just a fact that I am a Unix/C novice. Sorry if it is. =( But your help is much appreciated.
It's possible I'm just misunderstanding what you meant in your question, but I feel I should point out that in general, it's a bad idea to try to read the entire file at once, in case you deal with something that's just too big for your memory to handle.
It's smarter to allocate a buffer of a specific size, say 8192 bytes (well, that's what I tend to do a lot, anyway), and just always read and send that much, as much as necessary, until your read() operation returns 0 (and no errno set) for end of stream.
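As a sketch of that approach (hypothetical helper name; it reuses the conn descriptor and an already-opened openfd from the question, and omits the header logic):

#include <unistd.h>
#include <sys/socket.h>

/* Read the file in fixed-size chunks and forward each chunk to the client. */
static int send_file_chunked(int conn, int openfd) {
    char chunk[8192];
    ssize_t nread;

    while ((nread = read(openfd, chunk, sizeof chunk)) > 0) {
        char *p = chunk;
        while (nread > 0) {                        /* send() may write less than asked */
            ssize_t nsent = send(conn, p, (size_t)nread, 0);
            if (nsent == -1)
                return -1;                         /* client went away or other error  */
            p += nsent;
            nread -= nsent;
        }
    }
    return (nread == -1) ? -1 : 0;                 /* -1 from read() means an I/O error */
}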
I suspect you have a stackoverflow (I should get bonus points for using that term on this site).
The problem is you are allocating the buffer for the entire file on the stack all at once. For larger files, this buffer is larger than the stack, and the next time you try to call a function (and thus put some parameters for it on the stack) the program crashes.
The crash appears at the open line because allocating the buffer on the stack doesn't actually write any memory, it just changes the stack pointer. When your call to open tries to write the parameters to the stack, the top of the stack is now overflowed and this causes a crash.
The solution is, as Platinum Azure or dreamlax suggest, to read in the file a little bit at a time or to allocate your buffer on the heap with malloc or new.
Rather than using a variable length array, perhaps try allocating the memory using malloc.
char *buffer = malloc (len);
...
free (buffer);
I just did some simple tests on my system, and when I use variable length arrays of a big size (like the size you're having trouble with), I also get a SEGFAULT.
You're allocating the buffer on the stack, and it's way too big.
When you allocate storage on the stack, all the compiler does is decrease the stack pointer enough to make that much room (this keeps stack variable allocation to constant time). It does not try to touch any of this stacked memory. Then, when you call open(), it tries to put the parameters on the stack and discovers it has overflowed the stack and dies.
You need to either operate on the file in chunks, memory-map it (mmap()), or malloc() storage.
Also, path should be declared const char*.
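For completeness, a minimal sketch of the mmap() route mentioned above (POSIX only, most error handling omitted, and it assumes the file is non-empty):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map the whole file read-only and hand the mapping straight to send(). */
ssize_t send_file_mmap(int conn, const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd == -1)
        return -1;

    struct stat st;
    if (fstat(fd, &st) == -1) {
        close(fd);
        return -1;
    }

    void *data = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                                   /* the mapping stays valid after close */
    if (data == MAP_FAILED)
        return -1;

    ssize_t sent = send(conn, data, (size_t)st.st_size, 0);
    munmap(data, (size_t)st.st_size);
    return sent;
}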