Valgrind complains while working with threads - c

I have created 20 threads to read/write a shared file, and I have synchronized them.
My program works fine, but when I run it under Valgrind it gives me errors like this:
LEAK SUMMARY:
   definitely lost: 0 bytes in 0 blocks.
   possibly lost: 624 bytes in 5 blocks.
   still reachable: 1,424 bytes in 5 blocks.
   suppressed: 0 bytes in 0 blocks.
Reachable blocks (those to which a pointer was found) are not shown.
It gives the same errors when I press Ctrl+C.
I have not even malloc'd anything, but Valgrind still complains.
Any suggestions would be appreciated.

You can run valgrind --leak-check=full ./prog_name to make sure these reachable blocks are not something you can destroy in your program. Often, initializing a library such as libcurl without closing or destroying it will cause leaks. If it's not something you have control over, you can write a suppression file. Section 4.4 of http://valgrind.org/docs/manual/mc-manual.html has some information and a link to examples.
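For illustration, a suppression entry is a small plain-text block in a file you pass with --suppressions=my.supp. The entry below is a hypothetical one that would match any leak whose call stack passes through libcurl; Valgrind will print ready-made entries for you if you run it with --gen-suppressions=all.

{
   ignore-libcurl-init
   Memcheck:Leak
   ...
   obj:*/libcurl.so*
}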

Still-reachable blocks are probably caused by your standard library not freeing memory used in pools for standard containers (see this FAQ); that is a performance optimisation for program exit, since the memory is immediately going to be returned to the operating system anyway.
"Possibly lost" blocks are probably caused by the same thing.
The Valgrind manual page for Memcheck has a good explanation of the different kinds of leaks it detects.
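In a threaded program specifically, another common source of "possibly lost" blocks is threads that were never joined or detached: their stacks and per-thread bookkeeping are still allocated when the process exits. A minimal sketch (the worker body is a placeholder, not code from the question):

#include <pthread.h>

#define NUM_THREADS 20

/* placeholder worker; the real program would do the synchronized
   file reads/writes here */
static void *worker(void *arg) {
    (void)arg;
    return NULL;
}

int main(void) {
    pthread_t tids[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&tids[i], NULL, worker, NULL);

    /* joining every thread lets the pthread runtime release its
       bookkeeping, which removes those blocks from the leak summary */
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tids[i], NULL);

    return 0;
}

Compile with -pthread; if the blocks persist even with every thread joined, they are almost certainly the C-library pools described above.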

Related

valgrind not able to find memory leak

I have encountered a weird issue. A process (written in C) is leaking memory, but I am not able to locate why. Memory usage increases continuously when the process handles traffic, and at some point the OS (Linux) kills it with an 'out of memory' error.
I tried to debug this using valgrind with the following flags:
--show-reachable=yes --leak-check=full --show-leak-kinds=all --track-origins=yes --verbose --track-fds=yes --num-callers=20 --log-file=/tmp/valgrind-out.txt
Output file is as follows:
==5564== LEAK SUMMARY:
==5564== definitely lost: 0 bytes in 0 blocks
==5564== indirectly lost: 0 bytes in 0 blocks
==5564== possibly lost: 646,916 bytes in 1,156 blocks
==5564== still reachable: 4,742,112 bytes in 2,191 blocks
==5564== suppressed: 0 bytes in 0 blocks
Definitely lost is shown as 0, and there is no indication of where the memory is leaking. I have gone through the still-reachable entries, and they all seem fine.
I won't be able to post the code, as it is huge (100k+ lines). Basically, it sends packets over a TCP socket as a client; the server is a simple Python script that replies with a response. My code works as expected; this leak is the only trouble.
Any suggestions on debugging this issue?

Why does my program automatically free things?

I'm trying to write a program for a university assignment, and I'm getting this message from Valgrind:
==4244== HEAP SUMMARY:
==4244== in use at exit: 300 bytes in 2 blocks
==4244== total heap usage: 5 allocs, 3 frees, 2,428 bytes allocated
I don't know where these 3 frees come from, because when I removed all of my calls to free(), the 3 frees were still there. I thought that when a function returns without freeing things, that would be a memory leak.
My question is: why isn't it? Does a function free everything declared in it when it returns? If so, how do I know that a function is successfully freeing the things declared in it?
Stuff gets malloc()ed and free()ed in code outside of your source, such as library functions. Valgrind sees all of that.
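For example, even this program, which never calls malloc() itself, shows heap activity under Valgrind, because the C library allocates stdout's buffer (among other things) behind the scenes:

#include <stdio.h>

int main(void) {
    printf("hello\n");   /* stdio allocates an output buffer internally */
    return 0;
}

The exact alloc/free counts vary with the platform and C-library version; the 3 frees you are seeing are the same kind of library-internal housekeeping.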

Valgrind showing over 200 allocs for a hello world program on OS X?

I wrote a linked list in C today at work on a Linux machine, and everything checked out in Valgrind. Then I ran the same test (a handful of pushes and then deleting the list) at home on OS X and got a crazy number of allocs.
==4344== HEAP SUMMARY:
==4344== in use at exit: 26,262 bytes in 187 blocks
==4344== total heap usage: 267 allocs, 80 frees, 32,374 bytes allocated
==4344==
==4344== LEAK SUMMARY:
==4344== definitely lost: 0 bytes in 0 blocks
==4344== indirectly lost: 0 bytes in 0 blocks
==4344== possibly lost: 0 bytes in 0 blocks
==4344== still reachable: 0 bytes in 0 blocks
==4344== suppressed: 26,262 bytes in 187 blocks
==4344==
==4344== For counts of detected and suppressed errors, rerun with: -v
==4344== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
I know the code is fine and doesn't have any leaks. So I just commented out the list test and compiled with only printf("test\n"); in the main, and it showed 263 allocs with 76 frees (I had 4 intentional allocs in the list test). Why am I getting so many allocs on OS X? Is this just something the OS did? I don't understand why I'd have 263 allocs when I just did a printf...
OS X has a very bad architecture in this respect. Because libdl, libdyld, libm, libc and some other libraries are "packed" into libSystem, all of them are initialized when that library is loaded. Most of the allocations come from dyld, which is written in C and C++; the C++ part is what pushes up the number of allocs.
This is an Apple libc thing, not an OS X thing: I have written an alternative C library, and it does not make many of these "not-needed" allocs.
Allocations are also caused by opening FILE * streams; note that three of them (stdin, stdout and stderr) are initialized at startup.
Valgrind support on OS X is currently being actively worked on. Your best approach is to make sure you are using an SVN trunk build, and to update frequently.
The errors Valgrind is reporting come from within the OS X system libraries. They are not the fault of your program, but because even simple programs pull in these system libraries, Valgrind keeps picking them up. Suppressions in Valgrind trunk are continually being updated to catch these issues, allowing you to focus on the real problems that may be present in your code.
The following commands will allow you to use Valgrind trunk, if you're not already:
svn co svn://svn.valgrind.org/valgrind/trunk valgrind
cd valgrind
./autogen.sh
./configure
make -j4
sudo make install
Full disclosure: I'm one of the Valgrind developers who contributed patches to support OS X 10.11

valgrind: getting still reachable blocks with libssh2

I know this issue has been addressed in a few questions on Stack Overflow. Experts have pointed out that "still reachable" blocks are not a memory leak: they merely remain to be freed, and freeing them is entirely the programmer's choice. The problem in my case is that I am still not able to identify the pointer(s) that must be freed.
Valgrind clearly shows that no memory is lost, but when I kept my program running for 3 days, integrated with the rest of the Zabbix code, I noticed that available memory dropped from 2.75 GB to 2.05 GB (the machine has 4 GB of RAM).
==13630== LEAK SUMMARY:
==13630== definitely lost: 0 bytes in 0 blocks
==13630== indirectly lost: 0 bytes in 0 blocks
==13630== possibly lost: 0 bytes in 0 blocks
==13630== still reachable: 15,072 bytes in 526 blocks
==13630== suppressed: 0 bytes in 0 blocks
I want to show you the whole code, which is why I am not pasting it here; please click here to have a look at the code, which will work in Eclipse CDT.
Purpose of the code: I rewrote this code to let a Zabbix server get a system value from a Zabbix agent installed on a remote machine. The code creates a file in the directory "/ZB_RQ" on the remote host containing the request "vm.memory.size[available]", and the Zabbix agent writes the proper value back into the same file with a "#" prefix to distinguish the response from the request. For testing, I have used localhost (127.0.0.1) as the remote host. The program logs in as user "zabbix" with password "bullet123", as you can see from the code itself. The whole operation is carried out via the libssh2 API.
Before you get started: please create the directory "/ZB_RQ" and a user "zabbix" with password "bullet123", and install libssh2-devel on your Linux machine.
How to get the program working: when you run it under Valgrind, you will find (after the function "sendViaSFTP" has executed) that a file has been created in "/ZB_RQ". The program waits 3 seconds for the value to come back in the same file; if nothing arrives, it creates a new file with the same expectation. So, within those 3 seconds, you have to write a sample response (say "#test") into the file from another terminal. That way you can watch the whole execution.
The moment you kill the program (Ctrl+C), Valgrind shows the result above, followed by a very long list of "still reachable" blocks.
I have made sure to free every libssh2 object, but I still cannot figure out why memory keeps dropping. Is this happening because the reachable blocks pile up?
Even if this is not going to consume all the memory, please help me get rid of the reachable blocks.
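For what it's worth, the usual teardown order for a libssh2 SFTP session is roughly the sketch below; libssh2_exit() in particular is easy to miss. The names handle, sftp, session and sock are placeholders for whatever sendViaSFTP creates, not code from the linked project:

#include <unistd.h>
#include <libssh2.h>
#include <libssh2_sftp.h>

/* hypothetical teardown helper -- the arguments are placeholders */
static void teardown(LIBSSH2_SFTP_HANDLE *handle, LIBSSH2_SFTP *sftp,
                     LIBSSH2_SESSION *session, int sock)
{
    libssh2_sftp_close(handle);                  /* the open remote file */
    libssh2_sftp_shutdown(sftp);                 /* the SFTP subsystem */
    libssh2_session_disconnect(session, "normal shutdown");
    libssh2_session_free(session);               /* the session's allocations */
    close(sock);                                 /* the underlying TCP socket */
    libssh2_exit();                              /* libssh2's global state */
}

Whatever remains after a teardown like this is typically one-time initialization inside the crypto backend (OpenSSL or libgcrypt); that stays "still reachable" by design and does not grow over time, so it would not explain a steady 700 MB drop over 3 days.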

How can I debug 'zend_mm_heap corrupted' for my php extension

The Problem
I've written a PHP extension (for PHP 5.3) which appears to work fine in simple tests, but the moment I start making multiple calls to it, I start seeing the error:
zend_mm_heap corrupted
normally in a console or the Apache error log. I also sometimes see the error:
[Thu Jun 19 16:12:31.934289 2014] [:error] [pid 560] [client 127.0.0.1:35410] PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 139678164955264 bytes) in Unknown on line 0
What I've tried to do
I've tried to find the exact spot where the issue occurs, and it appears to happen between the destructor being called for the PHP class that calls the extension and the first line of the constructor running on the next call. (Note: I have mainly used PHPUnit to diagnose this; if I run it in a browser, it will usually work once and then throw the error to the log on the next attempt, with 'The connection was reset' in the browser window and therefore no output.)
I've tried adding debug lines with memory_get_usage and installing the memprof extension, but the output fails to show any serious memory problem, and I've never seen memory usage above 8 MB.
I've looked at other Stack Overflow posts about changing PHP settings to deal with the zend_mm_heap corrupted issue, and tried disabling/enabling garbage collection, without any success.
What I'm looking for
I realise there is not enough information here to tell what is causing what I presume to be a memory leak. What I want to know is: what are the possible and probable causes of my issue, and how can I go about diagnosing it to find the problem?
Note:
I have tried building my extension with --enable-debug, but it comes back as an unrecognised argument.
Edit: Valgrind
I have run it under Valgrind and got the following output:
--24803-- REDIR: 0x4ebde30 (__GI_strncmp) redirected to 0x4c2dd20 (__GI_strncmp)
--24803-- REDIR: 0x4ec1820 (__GI_stpcpy) redirected to 0x4c2f860 (__GI_stpcpy)
Segmentation fault (core dumped)
==24803==
==24803== HEAP SUMMARY:
==24803== in use at exit: 2,401 bytes in 72 blocks
==24803== total heap usage: 73 allocs, 1 frees, 2,417 bytes allocated
==24803==
==24803== Searching for pointers to 72 not-freed blocks
==24803== Checked 92,624 bytes
==24803==
==24803== LEAK SUMMARY:
==24803== definitely lost: 0 bytes in 0 blocks
==24803== indirectly lost: 0 bytes in 0 blocks
==24803== possibly lost: 0 bytes in 0 blocks
==24803== still reachable: 2,401 bytes in 72 blocks
==24803== suppressed: 0 bytes in 0 blocks
==24803== Reachable blocks (those to which a pointer was found) are not shown.
==24803== To see them, rerun with: --leak-check=full --show-reachable=yes
==24803==
==24803== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)
--24803--
--24803-- used_suppression: 2 dl-hack3-cond-1
==24803==
==24803== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)
This suggests to me that perhaps the issue isn't a memory leak, but I am not certain of that.
It appears to me that your program does have heap memory corruption, which is difficult to find by looking at a code snippet or a faulty call stack. You may want to run your program under a dynamic tool (Valgrind, or WinDbg/PageHeap on Windows) to track down the actual source of the error:
$ valgrind --tool=memcheck --db-attach=yes ./a.out
This way, Valgrind attaches your program to the debugger when the first memory error is detected, so that you can do live debugging with GDB. This should be the best way to understand and resolve your problem.
Allowed memory size of 134217728 bytes exhausted (tried to allocate
139678164955264 bytes) in Unknown on line 0
It looks like a signed-to-unsigned conversion is being executed somewhere in your program. Allocators normally take a size parameter of an unsigned type, so a negative value is interpreted as a very large one, and the allocation fails.
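As a tiny illustration of that failure mode (not your code, just the conversion): a negative length that reaches an allocator as a size_t turns into an enormous request:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int len = -16;                   /* a negative length from some earlier bug */
    void *p = malloc((size_t)len);   /* -16 wraps to ~1.8e19 on a 64-bit system */
    printf("asked for %zu bytes, got %p\n", (size_t)len, p);
    free(p);                         /* free(NULL) is harmless */
    return 0;
}

A corrupted heap can feed exactly this kind of garbage size to the allocator, which is consistent with the 139678164955264-byte request in your log.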
