SDL Events Memory Leak - c

Are there any methods or functions in SDL known to cause memory leaks?
I noticed that as time went on, 0.1 MB of memory kept being tacked onto my program's memory usage (i.e. an extra 0.4 MB was added in exactly 3 minutes).
I commented out all of my surface drawing/blitting functions and pretty much isolated the main game loop to the event structure and screen flipping, e.g.:
// .. Initialize
char quit = 0;
Uint8* keystate = NULL;
SDL_Event hEvent;

while (!quit)
{
    // .. Code
    while (SDL_PollEvent(&hEvent)) {
        keystate = SDL_GetKeyState(NULL);
        // .. Event processing
    }
    // .. More Code
    if (SDL_Flip(screen) == -1)
        return 1;
    SDL_Delay(1);
}
// .. Cleanup

My favourite tool to check for memory leaks is Valgrind.
After compiling your code, just run the following command:
valgrind --leak-check=full --show-reachable=yes ./executable
When it finishes, check the output for memory-leak information.
The tool can be made more verbose with the -v flag, and adding --track-origins=yes also reports where uninitialised values originate:

valgrind --track-origins=yes --leak-check=full --show-reachable=yes ./executable

Related

Determining the proper predefined array size in C?

In the following code I have the array size set to 20. In Valgrind the code tests clean. But as soon as I change the size to 30, it gives me errors (shown further below). The part that confuses me is that I can change the value to 40 and the errors go away. Change it to 50, errors again. Then 60 tests clean and so on. Keeps going like that. So I was hoping someone might be able to explain this to me, because it's not quite clear to me despite my best efforts to wrap my head around it. These errors were hard to pinpoint because the code by all appearances was valid.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct record {
    int number;
    char text[30];
};

int main(int argc, char *argv[])
{
    FILE *file = fopen("testfile.bin", "w+");
    if (ferror(file)) {
        printf("%d: Failed to open file.", ferror(file));
    }

    struct record rec = { 69, "Some testing" };

    fwrite(&rec, sizeof(struct record), 1, file);
    if (ferror(file)) {
        fprintf(stdout, "Error writing file.");
    }

    fflush(file);
    fclose(file);
}
Valgrind errors:
valgrind --leak-check=full --show-leak-kinds=all\
--track-origins=yes ./fileio
==6675== Memcheck, a memory error detector
==6675== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==6675== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
==6675== Command: ./fileio
==6675==
==6675== Syscall param write(buf) points to uninitialised byte(s)
==6675== at 0x496A818: write (in /usr/lib/libc-2.28.so)
==6675== by 0x48FA85C: _IO_file_write@@GLIBC_2.2.5 (in /usr/lib/libc-2.28.so)
==6675== by 0x48F9BBE: new_do_write (in /usr/lib/libc-2.28.so)
==6675== by 0x48FB9D8: _IO_do_write@@GLIBC_2.2.5 (in /usr/lib/libc-2.28.so)
==6675== by 0x48F9A67: _IO_file_sync@@GLIBC_2.2.5 (in /usr/lib/libc-2.28.so)
==6675== by 0x48EEDB0: fflush (in /usr/lib/libc-2.28.so)
==6675== by 0x109288: main (fileio.c:24)
==6675== Address 0x4a452d2 is 34 bytes inside a block of size 4,096 alloc'd
==6675== at 0x483777F: malloc (vg_replace_malloc.c:299)
==6675== by 0x48EE790: _IO_file_doallocate (in /usr/lib/libc-2.28.so)
==6675== by 0x48FCBBF: _IO_doallocbuf (in /usr/lib/libc-2.28.so)
==6675== by 0x48FBE47: _IO_file_overflow@@GLIBC_2.2.5 (in /usr/lib/libc-2.28.so)
==6675== by 0x48FAF36: _IO_file_xsputn@@GLIBC_2.2.5 (in /usr/lib/libc-2.28.so)
==6675== by 0x48EFBFB: fwrite (in /usr/lib/libc-2.28.so)
==6675== by 0x10924C: main (fileio.c:19)
==6675== Uninitialised value was created by a stack allocation
==6675== at 0x109199: main (fileio.c:11)
==6675==
==6675==
==6675== HEAP SUMMARY:
==6675== in use at exit: 0 bytes in 0 blocks
==6675== total heap usage: 2 allocs, 2 frees, 4,648 bytes allocated
==6675==
==6675== All heap blocks were freed -- no leaks are possible
==6675==
==6675== For counts of detected and suppressed errors, rerun with: -v
==6675== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
The problem is that there is padding in the structure to keep the int member aligned to 4 bytes, even within an array of struct record. Now, 20 + 4 is divisible by 4, and so are 40 + 4 and 60 + 4. But 30 + 4 and 50 + 4 are not, hence 2 padding bytes need to be added to make sizeof (struct record) divisible by 4.
When you run the code with array size 30, sizeof (struct record) == 36, and bytes 35 and 36 contain indeterminate values - even if the struct record is otherwise fully initialized. What is worse, code that writes indeterminate values can leak sensitive information - the Heartbleed bug being a prime example.
The solution is actually to not write the structure using fwrite. Instead, write the members individually (a sketch follows below) - this improves portability too. There isn't much performance difference either, as fwrite buffers the writes and fread buffers the reads.
P.S. the road to hell is paved with packed structs; you want to avoid them like the plague in generic code.
P.P.S. ferror(file) will almost certainly never be true just after fopen - and on normal failures fopen returns NULL, so ferror(NULL) will probably lead to a crash.
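For illustration, a minimal sketch (not the answerer's exact code) of writing the members individually, reusing struct record and FILE *file from the question, so that no padding bytes ever reach the file:

struct record rec = { 69, "Some testing" };

/* write each member on its own; the padding is never copied to the file */
if (fwrite(&rec.number, sizeof rec.number, 1, file) != 1 ||
    fwrite(rec.text, sizeof rec.text, 1, file) != 1) {
    fprintf(stderr, "Error writing file.\n");
}

Reading the record back would mirror the same two calls with fread, in the same order.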
[edit]
My answer relates to a weakness in OP's code, yet the Valgrind "write(buf) points to uninitialised byte(s)" report is due to other reasons, answered by others.
When the open fails, ferror(file) is undefined behavior (UB).
if (ferror(file)) is not the right test for determining open success.
FILE *file = fopen("testfile.bin", "w+");
// if (ferror(file)) {
//     printf("%d: Failed to open file.", ferror(file));
// }
if (file == NULL) {
    printf("Failed to open file.");
    return -1; // exit code, do not continue
}
I do not see other obvious errors.
ferror(file) is useful to test the result of I/O, not of opening a file.
I initially misinterpreted the valgrind output, so @chux's answer deserves acceptance. I'll try to put together the best answer I can, though.
Checking errors
The first error (the one I didn't immediately consider) is to check the value returned by fopen(3) with ferror(3). The fopen(3) call returns NULL on error (and sets errno), so checking NULL with ferror(3) is wrong.
Serializing a structure to a file
With the initialization you write all the fields of your structure, but you don't initialize all the memory it covers. Your compiler might, for example, leave some padding in the structure in order to access the data more efficiently. As you write the whole structure to the file, you are actually passing uninitialized data to the fwrite(3) function.
By changing the size of the array you change Valgrind's behaviour. This is probably because the compiler changes the layout of the structure in memory and uses different padding.
Try wiping the rec variable with memset(&rec, 0, sizeof(rec)); and Valgrind should stop complaining. This will only fix the symptom though: since you are serializing binary data, you should mark struct record with __attribute__((packed)).
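A rough sketch of that memset suggestion (assuming the struct record and FILE *file from the question): zero the whole object before filling in the members, so the padding bytes are already defined when fwrite copies them.

struct record rec;
memset(&rec, 0, sizeof(rec));      /* wipes the padding bytes too */
rec.number = 69;
strcpy(rec.text, "Some testing");  /* fits comfortably in text[30] */
fwrite(&rec, sizeof(rec), 1, file);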
Initializing memory
Your original initialization is good.
An alternative way of initializing data is to use strncpy(3). Strncpy will accept as parameters a pointer to the destination to write, a pointer to the source memory chunk (where data should be taken from) and the available write size.
By using strncpy(rec.text, "hello world", sizeof(rec.text)) you write "hello world" over the rec.text buffer. But you should pay attention to the termination of the string: strncpy won't write beyond the given size, and if the source string is longer than that, there won't be any string terminator.
Strncpy can be used safely as follows
strncpy(rec.text, "hello world", sizeof(rec.text) - 1);
rec.text[sizeof(rec.text) - 1] = '\0';
The first line copies "hello world" to the target string. sizeof(rec.text) - 1 is passed as the size, so that we leave room for the \0 terminator, which is written explicitly as the last character to cover the case in which sizeof(rec.text) is shorter than "hello world".
Nitpicks
Finally, error notifications should go to stderr, while stdout is for results.

Still reachable with puts and printf

Valgrind is reporting the still reachable "error" on functions like printf and puts. I really don't know what to do about this. I need to get rid of it since it's a school project and there must be no errors at all. How do I deal with this? From the report I can see those functions use malloc, but I always thought they handled that memory themselves, right?
I'm using Mac OS X, so maybe it's a problem between valgrind and the OS?
SAMPLE CODE: the error appears on any of the puts or printf calls that are used
void twittear(array_t* array_tweets, hash_t* hash, queue_t* queue_input){
    char* user = queue_see_first(queue_input);
    char key[1] = "#";
    if (!user || user[0] != key[0]) {
        puts("ERROR_WRONG_COMAND");
        return;
    }
    queue_t* queue_keys = queue_create();
    char* text = join_text(queue_input, queue_keys);
    if (!text) {
        puts("ERROR_TWEET_TOO_LONG");
        queue_destroy(queue_keys, NULL);
        return;
    }
    int id = new_tweet(array_tweets, text);
    while (!queue_is_empty(queue_keys))
        hash_tweet(hash, queue_dequeue(queue_keys), id);
    queue_destroy(queue_keys, NULL);
    printf("OK %d\n", id);
}
ERROR:
==1954== 16,384 bytes in 1 blocks are still reachable in loss record 77 of 77
==1954== at 0x47E1: malloc (vg_replace_malloc.c:300)
==1954== by 0x183855: __smakebuf (in /usr/lib/system/libsystem_c.dylib)
==1954== by 0x198217: __swsetup (in /usr/lib/system/libsystem_c.dylib)
==1954== by 0x1B1158: __v2printf (in /usr/lib/system/libsystem_c.dylib)
==1954== by 0x1B16AF: __xvprintf (in /usr/lib/system/libsystem_c.dylib)
==1954== by 0x188B29: vfprintf_l (in /usr/lib/system/libsystem_c.dylib)
==1954== by 0x18696F: printf (in /usr/lib/system/libsystem_c.dylib)
==1954== by 0x1000036F3: twittear (main.c:138)
==1954== by 0x100003C8D: main (main.c:309)
Valgrind is somewhat glitchy on Mac OS X; it doesn't have complete suppressions for some system libraries. (That is, it doesn't properly ignore some "expected" leaks.) Results will frequently include some spurious results like this, as well as some buffer overruns resulting from optimizations in functions like memcpy().
My advice? Avoid using valgrind on Mac OS X unless you are very familiar with the tool. Compile and test your application on a Linux system for best results.
This is caused by the stdio library. A "Hello World" program is sufficient to reproduce it - just printf or fprintf to stdout or stderr. The first time you write to a FILE, it uses malloc to allocate an output buffer. This buffer allocation happens inside the __swsetup() function (download the libc source from Apple and you will see it in there; it is actually copied from FreeBSD, so many *BSD systems have the same function). When you call fclose() on the FILE, the buffer is freed. The issue with the standard streams (stdout, stderr) is that one normally doesn't close them, so this buffer is normally never freed.
You can make this "leak" go away by adding an fclose() on stdout and/or stderr before terminating your program. But honestly, there is no need to do that; you can just ignore it. This is a fixed-size buffer which is not going to grow, so it is not a "leak" as such. Closing stdout/stderr at the end of your program does not achieve anything useful.
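To see this in isolation, here is a minimal sketch (a plain hello-world, not the asker's code):

#include <stdio.h>

int main(void)
{
    printf("hello\n");  /* first write to stdout mallocs the stdio buffer */
    fclose(stdout);     /* frees it, so nothing is reported as still reachable */
    return 0;
}

Run it under valgrind with and without the fclose() line and compare the "still reachable" figures in the heap summary.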

how to use Dtrace to check malloc on Solaris 10?

I've run into a troublesome memory-usage bug, so I want to use DTrace to check malloc and free on Solaris 10.
I used the following command:
dtrace -l | grep malloc
The output is:
7000 fbt unix prom_malloc entry
7001 fbt unix prom_malloc return
7141 fbt genunix cacl_malloc entry
7142 fbt genunix cacl_malloc return
12319 fbt genunix rmallocmap_wait entry
12320 fbt genunix rmallocmap_wait return
13078 fbt genunix rmalloc_wait entry
13079 fbt genunix rmalloc_wait return
13526 fbt genunix rmallocmap entry
13527 fbt genunix rmallocmap return
16846 fbt genunix rmalloc entry
16847 fbt genunix rmalloc return
25931 fbt tmpfs tmp_memalloc entry
25932 fbt tmpfs tmp_memalloc return
It seems there is no malloc probe.
I have checked Solaris Internals and found that malloc calls sbrk. So I used the following command:
dtrace -l | grep sbrk
But nothing was found.
So how can I use DTrace to check malloc on Solaris 10?
There are various tools that already implement the logic required to identify memory leaks under Solaris:
- libumem & mdb (UMEM_DEBUG=default UMEM_LOGGING=transaction LD_PRELOAD=libumem.so.1, then mdb's ::findleaks)
- dbx (check -leaks)
Should you still want to go the dtrace way, you need to trace the process you suspect of leaking memory using the pid provider. You searched the kernel probes with dtrace -l and found nothing, but this is expected, as the kernel does not implement malloc or brk; they are userland functions located in the C standard library.
This script will trace every malloc and free call made by a program:
dtrace -qn '
pid$target:libc:malloc:entry {
    self->size = arg0;
}
pid$target:libc:malloc:return /self->size/ {
    printf("malloc(%d)=%p\n", self->size, arg1);
    self->size = 0;
}
pid$target:libc:free:entry {
    printf("free(%p)\n", arg0);
}
' -c program_to_trace
For more in-depth examples, have a look at http://ewaldertl.blogspot.fr/2010/09/debugging-memory-leaks-with-dtrace-and.html and http://www.joyent.com/blog/bruning-questions-debugging
For example, in order to trace all malloc calls with dtrace, printing each allocation, you would write a script like the one below (saved to a file named trace-malloc.d in this example):
#!/usr/sbin/dtrace -s

pid$1::malloc:entry
{
    self->trace = 1;
    self->size = arg0;
}

pid$1::malloc:return
/self->trace == 1/
{
    /* log the memory allocation */
    printf("<__%i;%Y;%d;malloc;0x%x;%d;\n", i++, walltimestamp, tid, arg1, self->size);
    ustack(50);
    printf("__>\n\n");
    self->trace = 0;
    self->size = 0;
}
and then call it by passing the process id of the process you want to trace,
for example:
./trace-malloc.d 12345
In complex programs memory allocations and de-allocations are very frequent, so I've written a small program to help identify memory leaks with dtrace. Each memory operation with malloc / calloc / realloc and free is traced, and then an analysis program reads and processes all the traces and points out suspected memory leaks, using various heuristics to flag the most strongly suspected ones. If you are interested, you can check it out here:
https://github.com/ppissias/DTLeakAnalyzer.

Where is the uninitialized value in this function?

I am running a debug version of my C binary within valgrind, which reports numerous errors of the sort Conditional jump or move depends on uninitialised value(s).
Using the symbol table, valgrind tells me where to look in my program for this issue:
==23899== 11 errors in context 72 of 72:
==23899== Conditional jump or move depends on uninitialised value(s)
==23899== at 0x438BB0: _int_free (in /foo/bar/baz)
==23899== by 0x43CF75: free (in /foo/bar/baz)
==23899== by 0x4179E1: json_tokener_parse_ex (json_tokener.c:593)
==23899== by 0x418DC8: json_tokener_parse (json_tokener.c:108)
==23899== by 0x40122D: readJSONMetadataHeader (metadataHelpers.h:345)
==23899== by 0x4019CB: main (baz.c:90)
I have the following function readJSONMetadataHeader(...) that calls json_tokener_parse():
int readJSONMetadataHeader(...) {
    char buffer[METADATA_MAX_SIZE];
    json_object *metadataJSON;
    int charCnt = 0;
    ...
    /* fill up the `buffer` variable here; basically a */
    /* stream of characters representing JSON data... */
    ...
    /* terminate `buffer` */
    buffer[charCnt - 1] = '\0';
    ...
    metadataJSON = json_tokener_parse(buffer);
    ...
}
The function json_tokener_parse() in turn is as follows:
struct json_object* json_tokener_parse(const char *str)
{
    struct json_tokener* tok;
    struct json_object* obj;
    tok = json_tokener_new();
    obj = json_tokener_parse_ex(tok, str, -1);
    if (tok->err != json_tokener_success)
        obj = (struct json_object*)error_ptr(-tok->err);
    json_tokener_free(tok);
    return obj;
}
Following the trace back to readJSONMetadataHeader(), it seems like the uninitialized value is the char [] (or const char *) variable buffer that is fed to json_tokener_parse(), which in turn is fed to json_tokener_parse_ex().
But the buffer variable gets filled with data and then terminated before the json_tokener_parse() function is called.
So why is valgrind saying this value is uninitialized? What am I missing?
It looks from the valgrind error report as if your application is statically linked (in particular, free appears to be in the main executable, and not libc.so.6).
Valgrind will report bogus errors for statically linked applications.
More precisely, there are intentional "don't care" errors inside libc. When you link the application dynamically, such errors are suppressed by default (via suppressions file that ships with Valgrind).
But when you link your application statically, Valgrind has no clue that the faulty code come from libc.a, and so it reports them.
Generally, statically linking applications on Linux is a bad idea (TM).
Running such an application under Valgrind is doubly so: Valgrind will not be able to intercept malloc/free calls, and will effectively catch only uninitialized memory reads, not the heap buffer overflows (or other heap corruption bugs) that it is usually good at.
I don't see charCnt initialized.
To see if it comes from buffer, simply initialize it with = {0}; this would also make your null termination of the buffer unnecessary.
Have a look in json_tokener_parse_ex() which you don't show. It's likely it's trying to free something that's not initialized.
buffer[charCnt - 1] = '\0';
This will at least fail if charCnt happens to be zero.
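A minimal sketch of that suggestion, keeping the names from the question (METADATA_MAX_SIZE and charCnt as declared there):

char buffer[METADATA_MAX_SIZE] = {0};  /* every byte starts out defined */
int charCnt = 0;

/* ... fill buffer and count the characters into charCnt ... */

if (charCnt > 0 && charCnt <= (int)sizeof buffer)
    buffer[charCnt - 1] = '\0';        /* guarded: never indexes buffer[-1] */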

Valgrind errors even though all heap blocks were freed

I have recently developed a habit of running all of my programs through valgrind to check for memory leaks, but most of its results have been a bit cryptic for me.
For my latest run, valgrind -v gave me:
All heap blocks were freed -- no leaks are possible
That means my program's covered for memory leaks, right?
So what does this error mean? Is my program not reading certain memory blocks correctly?
ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 14 from 9)
1 errors in context 1 of 1:
Invalid read of size 4
at 0x804885B: findPos (in /home/a.out)
by 0xADD918: start_thread (pthread_create.c:301)
by 0xA26CCD: clone (clone.S:133)
Address 0x4a27108 is 0 bytes after a block of size 40 alloc'd
at 0x4005BDC: malloc (vg_replace_malloc.c:195)
by 0x804892F: readInput (in /home/a.out)
by 0xADD918: start_thread (pthread_create.c:301)
by 0xA26CCD: clone (clone.S:133)
used_suppression: 14 dl-hack3-cond-1
Also, what are the so-called "suppressed" errors here?
This seems obvious ... but it might be worth pointing out that the "no leaks are possible" message does not mean that your program cannot leak; it just means that it did not leak in the configuration under which it was tested.
If I run the following with valgrind with no command line parameters, it informs me that no leaks are possible. But it does leak if I provide a command line parameter.
#include <stdio.h>
#include <stdlib.h>

int main( int argc, char* argv[] )
{
    if ( argc > 1 )
        malloc( 5 );
    printf( "Enter any command line arg to cause a leak\n" );
}
- Yes, you are greatly covered; don't think that valgrind can easily miss a leak in user code.
- Your error means that you probably have a +1 error in indexing an array variable. The lines that valgrind reports should be accurate, so you should easily find it, provided you compile all your code with -g.
- Suppressed errors are usually from system libraries, which sometimes have small leaks or undetectable things like the state variables of threads. Your manual page should list the suppression file that is used by default.
Checking for memory leaks is one reason to use valgrind, but I'd say a better reason is to find more serious errors in your code, such as using an invalid array subscript or dereferencing an uninitialized pointer or a pointer to freed memory.
It's good if valgrind tells you that the code paths you exercised while running valgrind didn't result in memory leaks, but don't let that cause you to ignore reports of more serious errors, such as the one you're seeing here.
As others have suggested, rerunning valgrind after compiling with debug information (-g) would be a good next step.
If you are getting the error below:
"Invalid read of size 4"
check whether you are freeing memory first and only then moving to the next element. I got this error because in my linked list I was freeing the node first and then going to the next element.
Below is the code snippet where I was getting the error:
void free_memory(Llist **head_ref)
{
    Llist *current = NULL;
    current = *head_ref;
    while (*head_ref != NULL)
    {
        current = *head_ref;
        free(current);
        current = NULL;
        (*head_ref) = (*head_ref)->next;
    }
}
After the change, here is my code snippet:
void free_memory(Llist **head_ref)
{
    Llist *current = NULL;
    current = *head_ref;
    while (*head_ref != NULL)
    {
        current = *head_ref;
        (*head_ref) = (*head_ref)->next;
        free(current);
        current = NULL;
    }
}
