When are shared library functions loaded into the heap? - c

(This question concerns only the logical addresses)
I was experimenting with some code where I print out the addresses of different types/scopes of variables to better visualize the process image.
My confusion arose when I printed the addresses of a few variables that had been allocated on the heap by malloc, and then also printed the address of the printf function out of curiosity.
What I discovered is that printf was stored at a much higher address (i.e. closer to the stack) than my malloc-allocated variables. This doesn't make sense to me, because I assumed that the library functions would be loaded onto the heap first thing at runtime, before any other instructions are executed. I even put a printf statement before any malloc calls, in case library functions were loaded on the fly as they were needed, but it didn't change anything.
Thanks.

(This answer concerns Unix only. I don't know how it is on Windows.)
Most shared libraries are loaded into RAM before control reaches main, and the library containing printf definitely will be. The most important exception is that the functions in dlfcn.h can be used to load more shared libraries during execution of the program.
Shared libraries have never been loaded as part of the "heap", if by that you mean the region of memory used to satisfy malloc requests. They are loaded using the system primitive mmap, and could be placed anywhere at all in memory. As user3386109 pointed out in a comment on the question, on modern systems, their locations are intentionally randomized as a countermeasure for various exploits.

Related

Is there a way to identify details about memory allocations from a library

My process, which is linked to multiple libraries, is causing a memory leak. The leak is coming from one of the libraries. I am trying to see if there is a way to identify the memory allocated by the functions residing in these libraries, i.e. how much memory each library is using.
Would the memory allocator follow any specific pattern based on where malloc is called from? For example, if it is called from lib A, would allocation happen at addresses starting from 0xA, for lib B from 0xB, and so on?
Basically, I'm trying to see if there is a way to identify the leaking library and the leaked memory, and to dump that.
This is going to be a bit hard if you do it without the help of external tools. You have to be aware that there's nothing like a "gauge" telling your process how much memory it's actually using, or which library function allocated that memory. This comes down to two things:
Your OS, which is handing your process the memory, doesn't care or know which library asked for memory -- getting a new page mapped into your process' memory is just a syscall like any other.
Usually, it's the libc that provides functionality like malloc() and free() to programs/libraries/programmers. These functions wrap your OS's memory assignment/unassignment (in fact, mapping and unmapping) facilities; this allows you to allocate and free memory in units that are not multiples of the page size (usually 4 kB). However, this also means that you can't really rely on your OS to tell you how much of the memory your process received is really in use, how much has been properly cleaned up, and how much is leaking.
Therefore, you'll need some mechanism to deal with libc and your OS, that allows you to inspect what your process is doing inside. A typical tool is Valgrind. It's not overly complex, so I encourage you to try it.
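For the per-library question specifically, Valgrind's leak report already attributes each leaked block to a call stack, and the stack frames name the shared object they come from. A typical invocation (the program name is a placeholder):

```shell
# Run the program under Valgrind's default memcheck tool.
# --leak-check=full prints a stack trace for every leaked block;
# each frame in the trace names the library it belongs to, which is
# usually enough to identify the leaking library.
valgrind --leak-check=full --show-leak-kinds=definite ./myprog
```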

Debugging memory leak issues without any tool

Interviewer - If you have no tools to check how would you detect memory leak problems?
Answer - I will read the code and see if all the memory I have allocated has been freed by me in the code itself.
Interviewer wasn't satisfied. Is there any other way to do so?
For all the implementations described below, one needs to write wrappers for the malloc() and free() functions.
1. To keep things simple, keep track of the counts of malloc() and free() calls. If they are not equal, you have a memory leak.
2. A better version would be to keep track of the addresses malloc()'ed and free()'ed; this way you can identify which addresses were malloc()'ed but never free()'ed. This alone won't help much either, since you can't relate the addresses to the source code, which becomes a real challenge in a large code base.
3. So you can add one more feature. For example, I wrote a similar tool for the FreeBSD kernel: you can modify the malloc() call to store the module/file information (give each module/file a number; you can #define it in some header) and the stack trace of the function calls leading to the malloc(), and store these in a data structure whenever malloc() or free() is called. Use the address returned by malloc() to match it with its free(). Then, when there is a memory leak, you have information about which addresses were not free()'ed and in which file, and the exact functions called (through the stack trace) to pinpoint it.
The way this tool worked was: on a crash, I would get a core dump. I had defined globals (the data structure where I was collecting data) in kernel memory space, which I could access using gdb to retrieve the information.
Edit:
Recently, while debugging a memory leak in the Linux kernel, I came across a tool called kmemleak which implements an algorithm similar to the one I described in point 3 above. Read the Basic Algorithm section here: https://www.kernel.org/doc/Documentation/kmemleak.txt
My response when I had to do this for real was to build tools... a debugging heap layer, wrapped around the C heap, and macros to switch code to running against those calls rather than accessing the normal heap library directly. That layer included some fencepost logic to detect array bounds violations, some instrumentation to monitor what the heap was doing, optionally some recordkeeping of exactly who allocated and freed each block...
Another approach, of course, is "divide and conquer". Build unit tests to try to narrow down which operations are causing the leak, then to subdivide that code further.
Depending on what "no tools" means, core dumps are also sometimes useful; seeing the content of the heap may tell you what's being leaked, for example.
And so on....

What happens when different object files use different malloc implementations

I have a couple of questions.
Suppose a program is compiled using 2 object files. Each uses malloc and free in most of their functions. But these object files were generated at different times and happen to be using different malloc implementations. Let's say the implementations share variable names and function names. Will the program work fine or not? Why?
If a program has object file 1 and 2, code from object file 1 call malloc and allocates some memory then frees it. Now code from object file 2 calls malloc. Can it use the memory that was freed? How does it work underneath?
Trying to provide a useful answer, even though it's far from complete.
Part 1.
First, it's hard enough to link the program with two implementations of malloc sharing function names: duplicate definitions usually cause linker errors. I can see how we manage to do it using GNU binutils, and there probably are some equivalent tricks for other toolchains. For the rest of the answer, let's assume we managed to link two implementations successfully. (It's usually a good thing that you get linker errors instead of mixing two implementations, possibly even introducing malloc/free asymmetry which has almost no chance to work).
Let's also assume that memory allocated with one particular implementation is always freed using free from the same implementation. Otherwise, it's virtually guaranteed to fail.
Two implementations may work together, or they may interfere, depending on how they request more memory from the OS when their local heaps run out of space. MS Windows has a system interface for managing heaps, and two different mallocs are likely to be built on top of it; then nothing prevents them from working together. Implementations that request memory with an sbrk-like call will work together if both are prepared for someone else to move the program break independently of their own malloc. I'd expect that malloc from glibc won't fail here, but I'm not really sure.
Part 2.
If the implementation used by object 1 is able to return memory to OS, memory can be reused by the implementation called by object 2. That is, memory reuse may happen but it's less likely than when a single implementation is used.
The possibility of returning memory to OS depends on malloc/free implementation, and may also depend on allocated chunk size and various system settings. For example, glibc uses anonymous mmap for large chunks of memory, and these chunks are unmapped when freed.

Determine total memory usage of embedded C program

I would like to be able to debug how much total memory is being used by a C program in a limited-resource environment of 256 KB of memory (currently I am testing in an emulator program).
I have the ability to print debug statements to the screen, but what method should I use to calculate how much memory my C program is using (including globals, local variables [from the perspective of my main function loop], the program code itself, etc.)?
A secondary aspect would be to display the location/ranges of specific variables as opposed to just their size.
-Edit- The CPU is Hitachi SH2, I don't have an IDE that lets me put breakpoints into the program.
Using the IDE options, take the proper actions (mark a checkbox, probably) so that the build process (namely, the linker) generates a map file.
A map file of an embedded system will normally give you the information you need in a detailed fashion: the memory segments, their sizes, how much memory is utilized in each one, program memory, data memory, etc. There is usually a lot of data supplied by the map file, and you might need to write a script to calculate exactly what you need, or copy it into Excel. The map file might also contain summary information for you.
The stack is a bit trickier. If the map file gives that, then there you have it. If not, you need to find it yourself. Embedded compilers usually let you define the stack location and size. Put a breakpoint at the start of your program. When the application stops there, zero the entire stack. Resume the application and let it work for a while. Finally, stop it and inspect the stack memory. You will see non-zero values where the stack was used; the used portion extends until the untouched zeros begin again.
Generally you will have different sections in the generated map file, showing where the data goes, like:
.intvect
.intvect_end
.rozdata
.robase
.rosdata
.rodata
.text ... and so on,
with other attributes like Base, Size (hex), Size (dec), etc. for each section.
While at any time local variables may take up more or less space (as they go in and out of scope), they are instantiated on the stack. In a single-threaded environment, the stack will be a fixed allocation known at link time. The same is true of all statically allocated data. The only run-time variable part is dynamically allocated data, but even then such data is allocated from the heap, which in most bare-metal, single-threaded environments is a fixed link-time allocation.
Consequently all the information you need about memory allocation is probably already provided by your linker. Often (depending on your toolchain and the linker parameters used) basic information is output when the linker runs. You can usually request that a full linker map file be generated, and this will give you detailed information. Some linkers can perform stack usage analysis that will give you the worst-case stack usage for any particular function. In a single-threaded environment, the stack usage from main() gives the worst-case overall usage (although interrupt handlers need consideration: the linker is not thread- or interrupt-aware, and some architectures have separate interrupt stacks while on others the stack is shared).
Although the heap itself is typically a fixed allocation (often all the available memory after the linker has performed static allocation of stack and static data), if you are using dynamic memory allocation, it may be useful at run-time to know how much memory has been allocated from the heap, as well as information about the number of allocations, average size of allocation, and the number of free blocks and their sizes also. Because dynamic memory allocation is implemented by your system's standard library any such analysis facility will be specific to your library, and may not be provided at all. If you have the library source you could implement such facilities yourself.
In a multi-threaded environment, thread stacks may be allocated statically or from the heap, but either way the same analysis methods described above apply. For stack usage analysis, the worst-case for each thread is measured from the entry point of each thread rather than from main().

malloc in an embedded system without an operating system

This query is regarding allocation of memory using malloc.
Generally we say that malloc allocates memory from the heap.
Now say I have a plain embedded system (no operating system), with a normal program loaded, in which I call malloc.
In this case, where is the memory allocated from?
malloc() is a function that is usually implemented by the runtime library. You are right: if you are running on top of an operating system, then malloc will sometimes (but not every time) trigger a system call that makes the OS map some memory into your program's address space.
If your program runs without an operating system, then you can think of your program as being the operating system. You have access to all addresses, meaning you can just assign an address to a pointer, then de-reference that pointer to read/write.
Of course you have to make sure that no other parts of your program use the same memory, so you write your own memory manager:
To put it simply, you can set aside a range of addresses which your "memory manager" uses to record which address ranges are already in use (the data structure stored there can be as simple as a linked list or much more complex). Then you write a function, called e.g. malloc(), which forms the functional part of your memory manager. It looks into the mentioned data structure to find a free address range as long as the argument specifies and returns a pointer to it.
Now, if every function in your program calls your malloc() instead of randomly writing to arbitrary addresses, you've done the first step. You can then write a free() function which looks for the pointer it is given in the mentioned data structure and adapts it (in the naive linked-list case it would merge two links).
The only real answer is "Wherever your compiler/library-implementation puts it".
In the embedded system I use, there is no heap, since we haven't written one.
From the heap as you say. The difference is that the heap is not provided by the OS. Your application's linker script will no doubt include an allocation for the heap. The run-time library will manage this.
In the case of the Newlib C library, often used in GCC-based embedded systems not running an OS (or at least not running Linux), the library has a stub syscall function called sbrk(). It is the responsibility of the developer to implement sbrk(), which must provide more memory to the heap manager on request. Typically it merely increments a pointer and returns a pointer to the start of the new block; thereafter the library's heap manager manages and maintains the new block, which may or may not be contiguous with previous blocks. The previous link includes an example implementation.
