I am working on an embedded device with only 512MB of RAM, running the Linux kernel. I want to do the memory management of all the processes running in userspace with my own library. Is it possible to do so? From my understanding, memory management is done by the kernel; is it possible to have that functionality in user space?
If your embedded device runs Linux, it almost certainly has an MMU. Controlling the MMU is normally a privileged operation, so only the operating system kernel has access to it. Therefore the answer is: no, you can't.
Of course you can write software that runs directly on the device, without an operating system, but I guess that's not what you want. You should probably take a step back, ask yourself what gave you the idea of doing your own memory management, and whether there is a better way to solve the original problem.
You can consider using setrlimit; refer to the related Q&A. I wrote the test code below and ran it on my PC, and I can see that memory usage is limited. Note that the RLIMIT_AS limit is specified in bytes.
#include <stdlib.h>
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(int argc, char *argv[])
{
    long limitSize = 1;
    long testSize = 140000;

    // 1. BEFORE: getrlimit
    {
        struct rlimit asLimit;
        getrlimit(RLIMIT_AS, &asLimit);
        printf("BEFORE: rlimit(RLIMIT_AS) = %ld,%ld\n", asLimit.rlim_cur, asLimit.rlim_max);
    }

    // 2. BEFORE: test malloc
    {
        char *xx = malloc(testSize);
        if (xx == NULL)
            perror("malloc FAIL");
        else
            printf("malloc(%ld) OK\n", testSize);
        free(xx);
    }

    // 3. setrlimit
    {
        struct rlimit new;
        new.rlim_cur = limitSize;
        new.rlim_max = limitSize;
        setrlimit(RLIMIT_AS, &new);
    }

    // 4. AFTER: getrlimit
    {
        struct rlimit asLimit;
        getrlimit(RLIMIT_AS, &asLimit);
        printf("AFTER: rlimit(RLIMIT_AS) = %ld,%ld\n", asLimit.rlim_cur, asLimit.rlim_max);
    }

    // 5. AFTER: test malloc
    {
        char *xx = malloc(testSize);
        if (xx == NULL)
            perror("malloc FAIL");
        else
            printf("malloc(%ld) OK\n", testSize);
        free(xx);
    }
    return 0;
}
Result:
BEFORE: rlimit(RLIMIT_AS) = -1,-1
malloc(140000) OK
AFTER: rlimit(RLIMIT_AS) = 1,1
malloc FAIL: Cannot allocate memory
From what I understand of your question, you want to somehow use your own library for handling the memory of userspace processes. I presume you are doing this to make sure that rogue processes don't use too much memory, so that your process can use as much memory as is available. I believe this idea is flawed.
For example, imagine this scenario:
Total memory 512MB
Process 1: limit of 128MB - uses 64MB
Process 2: limit of 128MB - uses 64MB
Process 3: limit of 256MB - uses 256MB, then hits its limit and fails, when in fact 128MB (512 - 64 - 64 - 256) is still free.
I know you THINK this is the answer to your problem, and on 'normal' embedded systems it probably would be, but you are using a complex kernel, running processes you don't have total control over. You should write YOUR software to be robust when memory gets tight, because that is all you can control; see the sketch below.
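For illustration, here is a minimal sketch (my own; names are illustrative) of one way to degrade gracefully: set aside a small emergency reserve at startup and release it when allocations start failing.

#include <stdlib.h>
#include <stdio.h>

static void *emergency_reserve = NULL;

void reserve_init(size_t n)
{
    emergency_reserve = malloc(n);   /* set aside some slack at startup */
}

void *robust_malloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL && emergency_reserve != NULL) {
        free(emergency_reserve);     /* give the heap some room back */
        emergency_reserve = NULL;
        p = malloc(n);               /* retry once */
    }
    if (p == NULL)
        fprintf(stderr, "out of memory: degrade or save state here\n");
    return p;
}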
While I can write reasonable C code, my expertise is mainly with Java and so I apologize if this question makes no sense.
I am writing some code to help me do heap analysis. I'm doing this via instrumentation with LLVM. What I'm looking for is a way to access the heap metadata for a process from within itself. Is such a thing possible? I know that information about the heap is stored in a number of malloc_state structs (main_arena, for example). If I can gain access to main_arena, I can start enumerating the different arenas, heaps, bins, etc. As I understand it, these variables are declared static inside glibc, so they can't be accessed directly.
But is there some way of getting this information? For example, could I use /proc/$pid/mem to leak the information somehow?
Once I have this information, I want to basically get information about all the different freelists. So I want, for every bin in each bin type, the number of chunks in the bin and their sizes. For fast, small, and tcache bins I know that I just need the index to figure out the size. I have looked at how these structures are implemented and how to iterate through them. So all I need is to gain access to these internal structures.
I have looked at malloc_info and that is my fallback, but I would also like to get information about tcache and I don't think that is included in malloc_info.
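For what it's worth, exercising that fallback takes only a few lines; a minimal sketch:

#include <malloc.h>   /* glibc-specific */
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    void *p = malloc(1000);   /* make sure the heap is initialized */
    malloc_info(0, stdout);   /* options argument must be 0; dumps arena/bin stats as XML */
    free(p);
    return 0;
}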
An option I have considered is to build a custom version of glibc that has the malloc_state variables declared non-static. But from what I can see, it's not very straightforward to build your own custom glibc, as you have to build the entire toolchain. I'm using clang, so I would have to build LLVM from source against my custom glibc (at least, this is what I've understood from researching this approach).
I had a similar requirement recently, so I do think that being able to get to main_arena for a given process does have its value, one example being post-mortem memory usage analysis.
Using dl_iterate_phdr and elf.h, it's relatively straightforward to resolve main_arena based on the local symbol:
#define _GNU_SOURCE
#include <fcntl.h>
#include <link.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>

// Ignored:
// - Non-x86_64 architectures
// - Resource and error handling
// - Style

static int cb(struct dl_phdr_info *info, size_t size, void *data)
{
    if (strcmp(info->dlpi_name, "/lib64/libc.so.6") == 0) {
        int fd = open(info->dlpi_name, O_RDONLY);
        struct stat stat;
        fstat(fd, &stat);
        char *base = mmap(NULL, stat.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        Elf64_Ehdr *header = (Elf64_Ehdr *)base;
        Elf64_Shdr *secs = (Elf64_Shdr *)(base + header->e_shoff);
        for (unsigned secinx = 0; secinx < header->e_shnum; secinx++) {
            if (secs[secinx].sh_type == SHT_SYMTAB) {
                Elf64_Sym *symtab = (Elf64_Sym *)(base + secs[secinx].sh_offset);
                char *symnames = (char *)(base + secs[secs[secinx].sh_link].sh_offset);
                unsigned symcount = secs[secinx].sh_size / secs[secinx].sh_entsize;
                for (unsigned syminx = 0; syminx < symcount; syminx++) {
                    if (strcmp(symnames + symtab[syminx].st_name, "main_arena") == 0) {
                        void *mainarena = ((char *)info->dlpi_addr) + symtab[syminx].st_value;
                        printf("main_arena found: %p\n", mainarena);
                        raise(SIGTRAP);
                        return 0;
                    }
                }
            }
        }
    }
    return 0;
}

int main()
{
    dl_iterate_phdr(cb, NULL);
    return 0;
}
dl_iterate_phdr is used to get the base address of the mapped glibc. The mapping does not contain the symbol table needed (.symtab), so the library has to be mapped again. The final address is determined by the base address plus the symbol value.
(gdb) run
Starting program: a.out
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7ffff77f0700 (LWP 24834)]
main_arena found: 0x7ffff7baec60
Thread 1 "a.out" received signal SIGTRAP, Trace/breakpoint trap.
raise (sig=5) at ../sysdeps/unix/sysv/linux/raise.c:50
50 return ret;
(gdb) select 1
(gdb) print mainarena
$1 = (void *) 0x7ffff7baec60 <main_arena>
(gdb) print &main_arena
$3 = (struct malloc_state *) 0x7ffff7baec60 <main_arena>
The value matches that of main_arena, so the correct address was found.
There are other ways to get to main_arena without relying on the library itself. Walking the existing heap allows for discovering main_arena, for example, but that strategy is considerably less straightforward.
Of course, once you have main_arena, you need all internal type definitions to be able to inspect the data.
I am writing some code to help me do heap analysis.
What kind of heap analysis?
I want to basically get information about all the different freelists. So I want, for every bin in each bin type, the number of chunks in the bin and their sizes. For fast, small, and tcache bins I know that I just need the index to figure out the size.
This information only makes sense if you are planning to change the malloc implementation. It does not make sense to attempt to collect it if your goal is to analyze or improve heap usage by the application, so it sounds like you have an XY problem.
In addition, things like bin and tcache only make sense in a context of particular malloc implementation (TCMalloc and jemalloc would not have any bins).
For analysis of application heap usage, you may want to use TCMalloc, as it provides a lot of tools for heap profiling and introspection; for example, something like the session sketched below.
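With gperftools' tcmalloc, a heap-profiling session might look roughly like this (package, path, and file names are illustrative and vary by distribution):

gcc -o app app.c -ltcmalloc
HEAPPROFILE=/tmp/app.hprof ./app
pprof --text ./app /tmp/app.hprof.0001.heap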
malloc/calloc apparently use swap space to satisfy a request that exceeds available free memory. And that pretty much hangs the system, as the disk-use light stays on constantly. After it happened to me and I wasn't immediately sure why, I wrote the following short test program to check that this is indeed why the system was hanging,
/* --- test how many bytes can be malloc'ed successfully --- */
#include <stdio.h>
#include <stdlib.h>

int main ( int argc, char *argv[] ) {
    unsigned int nmalloc = (argc>1? atoi(argv[1]) : 10000000 ),
                 size    = (argc>2? atoi(argv[2]) : (0) );
    unsigned char *pmalloc = (size>0? calloc(nmalloc,size) : malloc(nmalloc));
    fprintf( stdout," %s malloc'ed %d elements of %d bytes each.\n",
        (pmalloc==NULL? "UNsuccessfully" : "Successfully"),
        nmalloc, (size>0?size:1) );
    if ( pmalloc != NULL ) free(pmalloc);
} /* --- end-of-function main() --- */
And that indeed hangs the system if the product of your two command-line args exceeds physical memory. The easiest solution would be some way whereby malloc/calloc automatically just fail. A harder, non-portable way would be to write a little wrapper that popen()'s a free command, parses the output, and only calls malloc/calloc if the request can be satisfied by the available "free" memory, maybe with a little safety factor built in.
Is there any easier and more portable way to accomplish that? (Apparently similar to this question can calloc or malloc be used to allocate ONLY physical memory in OSX?, but I'm hoping for some kind of "yes" answer.)
Edit:
Decided to follow Tom's /proc/meminfo suggestion. That is, rather than popen()'ing "free", just directly parse the existing and easily parsed /proc/meminfo file. And then a one-line macro of the form
#define noswapmalloc(n) ( (n) < 1000l*memfree(NULL)/2? malloc(n) : NULL )
finishes the job. memfree(), shown below, isn't as portable as I'd like, but can easily and transparently be replaced by a better solution if/when the need arises, which isn't now.
#define _GNU_SOURCE /* must come before any #include, for strcasestr() in string.h */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/* ==========================================================================
* Function: memfree ( memtype )
* Purpose: return number of Kbytes of available memory
* (as reported in /proc/meminfo)
* --------------------------------------------------------------------------
* Arguments: memtype (I) (char *) to null-terminated, case-insensitive
* (sub)string matching first field in
* /proc/meminfo (NULL uses MemFree)
* --------------------------------------------------------------------------
* Returns: ( int ) #Kbytes of memory, or -1 for any error
* --------------------------------------------------------------------------
* Notes: o
* ======================================================================= */
/* --- entry point --- */
int memfree ( char *memtype ) {
    /* ---
     * allocations and declarations
     * ------------------------------- */
    static char memfile[99] = "/proc/meminfo"; /* linux standard */
    static char deftype[99] = "MemFree";       /* default if caller passes null */
    FILE *fp = fopen(memfile,"r");             /* open memfile for read */
    char memline[999];                         /* read memfile line-by-line */
    int nkbytes = (-1);                        /* #Kbytes, init for error */
    /* ---
     * read memfile until line with desired memtype found
     * ----------------------------------------------------- */
    if ( memtype == NULL ) memtype = deftype;  /* caller wants default */
    if ( fp == NULL ) goto end_of_job;         /* but we can't get it */
    while ( fgets(memline,512,fp)              /* read next line */
            != NULL ) {                        /* quit at eof (or error) */
        if ( strcasestr(memline,memtype)       /* look for memtype in line */
             != NULL ) {                       /* found line with memtype */
            char *delim = strchr(memline,':'); /* colon following MemType */
            if ( delim != NULL )               /* NULL if file format error? */
                nkbytes = atoi(delim+1);       /* num after colon is #Kbytes */
            break; }                           /* no need to read further */
    } /* --- end-of-while(fgets()!=NULL) --- */
    end_of_job:                                /* back to caller with nkbytes */
    if ( fp != NULL ) fclose(fp);              /* close /proc/meminfo file */
    return ( nkbytes );                        /* and return nkbytes to caller */
} /* --- end-of-function memfree() --- */
#if defined(MEMFREETEST)
int main ( int argc, char *argv[] ) {
    char *memtype = ( argc>1? argv[1] : NULL );
    printf ( " memfree(\"%s\") = %d Kbytes\n Have a nice day.\n",
        (memtype==NULL?" ":memtype), memfree(memtype) );
} /* --- end-of-function main() --- */
#endif
malloc/calloc apparently use swap space to satisfy a request that exceeds available free memory.
Well, no.
Malloc/calloc use virtual memory. The "virtual" means that it's not real - it's an artificially constructed illusion made out of fakery and lies. Your entire process is built on these artificially constructed illusions - a thread is a virtual CPU, a socket is a virtual network connection, the C language is really a specification for a "C abstract machine", a process is a virtual computer (that implements the languages' abstract machine).
You're not supposed to look behind the magic curtain. You're not supposed to know that physical memory exists. The system doesn't hang - the illusion is just slower, but that's fine because the C abstract machine says nothing about how long anything is supposed to take and does not provide any performance guarantees.
More importantly, because of the illusion, software works. It doesn't crash just because there's not enough physical memory. The alternative, failure, means the software never completes at all, and "never" is many orders of magnitude worse than "slower because of swap space".
How to get malloc/calloc to fail if request exceeds free physical memory (i.e., don't use swap)
If you are going to look behind the magic curtain, you need to define your goals carefully.
For one example, imagine if your process has 123 MiB of code and there's currently 1000 MiB of free physical RAM; but (because the code is in virtual memory) only a tiny piece of the code is using real RAM (and the rest of the code is on disk because the OS/executable loader used memory mapped files to avoid wasting real RAM until it's actually necessary). You decide to allocate 1000 MiB of memory (and because the OS creating the illusion isn't very good, unfortunately this causes 1000 MiB of real RAM to be allocated). Next, you execute some more code, but the code you execute isn't in real memory yet, so the OS has to fetch the code from the file on the disk into physical RAM, but you consumed all of the physical RAM so the OS has to send some of the data to swap space.
For another example, imagine if your process has 1 MiB of code and 1234 MiB of data that was carefully allocated to make sure that everything fits in physical memory. Then a completely different process is started and it allocates 6789 MiB of memory for its code and data; so the OS sends all of your process' data to swap space to satisfy the other process that you have no control over.
EDIT
The problem here is that the OS providing the illusion is not very good. When you allocate a large amount of virtual memory with malloc() or calloc(); the OS should be able to use a tiny piece of real memory to lie to you and avoid consuming a large amount of real memory. Specifically (for most modern operating systems running on normal hardware); the OS should be able to fill a huge area of virtual memory with a single page full of zeros that is mapped many times (at many virtual addresses) as "read only", so that allocating a huge amount of virtual memory costs almost no physical RAM at all (until you write to the virtual memory, causing the OS to allocate the least physical memory needed to satisfy the modifications). Of course if you eventually do write to all of the allocated virtual memory, then you'll end up exhausting physical memory and using some swap space; but this will probably happen gradually and not all at once - many tiny delays scattered over a large period of time are far less likely to be noticed than a single huge delay.
With this in mind, I'd be tempted to try using mmap(..., MAP_ANONYMOUS, ...) instead of the (poorly implemented) malloc() or calloc(). This might mean that you have to deal with the possibility that the allocated virtual memory isn't guaranteed to be initialized to zeros, but (depending on what you're using the memory for) that's likely to be easy to work around.
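A minimal sketch of that suggestion (the size is illustrative): on Linux, MAP_ANONYMOUS memory is zero-filled and physical pages are only consumed as they are first touched.

#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t len = (size_t)1 << 30;   /* reserve 1 GiB of address space */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* ... touch only what you need; untouched pages cost no physical RAM ... */
    munmap(p, len);
    return 0;
}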
Expanding on a comment I made to the original question:
If you want to disable swapping, use the swapoff command (sudo swapoff -a). I usually run my machine that way, to avoid it freezing when firefox does something it shouldn't. You can use setrlimit() (or the ulimit command) to set a maximum VM size, but that won't properly compensate for some other process suddenly deciding to be a memory hog (see above).
Even if you choose one of the above options, you should read the rest of this answer to see how to avoid unnecessary initialisation on the first call to calloc().
As for your precise test harness, it turns out that you are triggering an unfortunate exception to GNU calloc()'s optimisation.
Here's a comment (now deleted) I made to another answer, which turns out to not be strictly speaking accurate:
I checked the glibc source for the default gnu/linux malloc library, and verified that calloc() does not normally manually clear memory which has just been mmap'd. And malloc() doesn't touch the memory at all.
It turns out that I missed one exception to the calloc optimisation. Because of the way the GNU malloc implementation initialises the malloc system, the first call to calloc always uses memset() to set the newly-allocated storage to 0. Every other call to calloc() passes through the entire calloc logic, which avoids calling memset() on storage which has been freshly mmap'd.
So the following modification to the test program shows radically different behaviour:
#include <stdio.h>
#include <stdlib.h>

int main ( int argc, char *argv[] ) {
    /* These three lines were added */
    void* tmp = calloc(1000, 1); /* force initialization */
    printf("Allocated 1000 bytes at %p\n", tmp);
    free(tmp);
    /* The rest is unchanged */
    unsigned int nmalloc = (argc>1? atoi(argv[1]) : 10000000 ),
                 size    = (argc>2? atoi(argv[2]) : (0) );
    unsigned char *pmalloc = (size>0? calloc(nmalloc,size) : malloc(nmalloc));
    fprintf( stdout," %s malloc'ed %d elements of %d bytes each.\n",
        (pmalloc==NULL? "UNsuccessfully" : "Successfully"),
        nmalloc, (size>0?size:1) );
    if ( pmalloc != NULL ) free(pmalloc);
}
Note that if you set MALLOC_PERTURB_ to a non-zero value, then it is used to initialise malloc()'d blocks, and forces calloc()'d blocks to be initialised to 0. That's used in the test below.
In the following, I used /usr/bin/time to show the number of page faults during execution. Pay attention to the number of minor faults, which are the result of the operating system zero-initialising a previously unreferenced page in an anonymous mmap'd region (and some other occurrences, like mapping a page already present in Linux's page cache). Also look at the resident set size and, of course, the execution time.
$ gcc -Og -ggdb -Wall -o mall mall.c
$ # A simple malloc completes instantly without page faults
$ /usr/bin/time ./mall 4000000000
Allocated 1000 bytes at 0x55b94ff56260
Successfully malloc'ed -294967296 elements of 1 bytes each.
0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 1600maxresident)k
0inputs+0outputs (0major+61minor)pagefaults 0swaps
$ # Unless we tell malloc to initialise memory
$ MALLOC_PERTURB_=35 /usr/bin/time ./mall 4000000000
Allocated 1000 bytes at 0x5648c2436260
Successfully malloc'ed -294967296 elements of 1 bytes each.
0.19user 1.23system 0:01.43elapsed 99%CPU (0avgtext+0avgdata 3907584maxresident)k
0inputs+0outputs (0major+976623minor)pagefaults 0swaps
$ # Same, with calloc. No page faults, instant completion.
$ /usr/bin/time ./mall 1000000000 4
Allocated 1000 bytes at 0x55e8257bb260
Successfully malloc'ed 1000000000 elements of 4 bytes each.
0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 1656maxresident)k
0inputs+0outputs (0major+62minor)pagefaults 0swaps
$ # Note: the variable is misspelled here (MALLOC_PERMUTE_ instead of MALLOC_PERTURB_), so it has no effect and the calloc is still instant
$ MALLOC_PERMUTE_=35 /usr/bin/time ./mall 1000000000 4
Allocated 1000 bytes at 0x5646f391e260
Successfully malloc'ed 1000000000 elements of 4 bytes each.
0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 1656maxresident)k
0inputs+0outputs (0major+62minor)pagefaults 0swaps
I've been reading a lot about memory allocation on the heap and how certain heap management allocators do it.
Say I have the following program:
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    // allocate 4 gigabytes of RAM
    void *much_mems = malloc(4294967296);
    // sleep for 10 minutes
    sleep(600);
    // free teh ram
    free(much_mems);
    // sleep some moar
    sleep(600);
    return 0;
}
Let's say for sake of argument that my compiler doesn't optimize out anything above, that I can actually allocate 4GiB of RAM, that the malloc() call returns an actual pointer and not NULL, that size_t can hold an integer as big as 4294967296 on my given platform, that the allocater implemented by the malloc call actually does allocate that amount of RAM in the heap. Pretend that the above code does exactly what it looks like it will do.
After the call to free executes, how does the kernel know that those 4 GiB of RAM are now eligible for use for other processes and for the kernel itself? I'm not assuming the kernel is Linux, but that would be a good example. Before the call to free, this process has a heap size of at least 4GiB, and afterward, does it still have that heap size?
How do modern operating systems allow userspace programs to return memory back to kernel space? Do free implementations execute a syscall to the kernel (or many syscalls) to tell it which areas of memory are now available? And is it possible that my 4 GiB allocation will be non-contiguous?
Do free implementations execute a syscall to the kernel (or many syscalls) to tell it which areas of memory are now available?
Yes.
A modern implementation of malloc on Linux will call mmap to allocate a large amount of memory. The kernel will find an unused virtual address, mark it as allocated, and return it. (The kernel may also return an error if there isn't enough free memory)
free would then call munmap to deallocate the memory, passing the address and size of the allocation.
On Windows, malloc will call VirtualAlloc and free will call VirtualFree.
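To make the mmap path concrete, here is a simplified sketch (not glibc's actual code) of how an allocator can remember a mapping's size so that free can hand it back to munmap:

#include <sys/mman.h>
#include <stddef.h>

void *big_alloc(size_t n)
{
    size_t total = n + sizeof(size_t);
    size_t *p = mmap(NULL, total, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return NULL;
    *p = total;          /* remember the mapping size in a header */
    return p + 1;        /* hand out the memory after the header */
}

void big_free(void *ptr)
{
    size_t *p = (size_t *)ptr - 1;   /* step back to the header */
    munmap(p, *p);                   /* the kernel reclaims the pages */
}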
On GNU/Linux with glibc, large memory allocations, of more than a few hundred kilobytes, are handled by calling mmap. When the free function is invoked on such a block, the library knows that the memory was allocated this way (thanks to metadata stored in a header). It simply calls munmap on it to release it. That's how the kernel knows; its mmap and munmap API is being used.
You can see these calls if you run strace on the program.
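For example:

$ strace -e trace=mmap,munmap ./a.out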
The kernel keeps track of all mmap-ed regions using a red-black tree. Given an arbitrary virtual address, it can quickly determine whether it lands in the mmap area, and which mapping, by performing a tree walk.
Before the call to free, this process has a heap size of at least 4GiB...
The C language does not define either "heap" or "stack". Before the call to free, this process has a chunk of 4 GiB of dynamically allocated memory...
and afterward, does it still have that heap size?
...and after the free(), access to that memory would be undefined behaviour, so for practical purposes, that dynamically allocated memory is no longer "there".
What the library does "under the hood" (e.g. caching, see below) is up to the library, and is subject to change without further notice. This could change with the amount of available physical memory, system load, runtime parameters, ...
How do modern operating systems allow userspace programs to return memory back to kernel space?
It's up to the standard library's implementation to decide (which, of course, has to talk to the operating system to actually, physically allocate / free memory).
Others have pointed out how certain, existing implementations do it. Other libraries, operating systems, and environments exist.
Do free implementations execute a syscall to the kernel (or many syscalls) to tell it which areas of memory are now available?
Possibly. A common optimization done by library implementations is to "cache" free()d memory, so subsequent malloc() calls can be served without talking to the kernel (which is a costly operation). When, how much, and how long memory is cached this way is, you guessed it, implementation-defined.
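As one concrete, glibc-specific example of that caching being visible to the program, glibc lets you explicitly ask the allocator to hand cached free memory back to the kernel:

#include <malloc.h>   /* glibc-specific */

void release_cached_memory(void)
{
    malloc_trim(0);   /* return as much cached free memory as possible */
}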
And is it possible that my 4 GiB allocation will be non-contiguous?
The process will always "see" contiguous memory. In a system supporting virtual memory (i.e. "modern" desktop OSes like Linux or Windows), the physical memory might be non-contiguous, but the virtual addresses your process gets to see will be contiguous (or the malloc() would have failed if this requirement could not be serviced).
Again, other systems exist. You might be looking at a system that doesn't virtualize addresses (i.e. gives physical addresses to the process). You might be looking at a system that assigns a given amount of memory to a process on startup, serves any malloc() requests from that, and doesn't support the allocation of additional memory. And so on.
If we're using Linux as an example, it uses mmap to allocate large chunks of memory. This means that when you free such a chunk, it gets unmapped, i.e. the kernel is told that it can now unmap this memory. Also read up on the brk and sbrk system calls. A good place to start would be here...
What does brk( ) system call do?
and here. The following post discusses how malloc is implemented, which will give you a good idea of what's happening under the covers...
How is malloc() implemented internally?
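As a tiny illustration of the program break that brk/sbrk manage (sbrk is obsolete for real allocators, but fine for a demo):

#include <unistd.h>
#include <stdio.h>

int main(void)
{
    void *before = sbrk(0);   /* current program break */
    sbrk(4096);               /* grow the data segment by one page */
    void *after = sbrk(0);
    printf("program break moved from %p to %p\n", before, after);
    return 0;
}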
Doug Lea's malloc can be found here. It's well commented and public domain...
ftp://g.oswego.edu/pub/misc/malloc.c
In the implementation shown below, malloc() and free() are backed by kernel-level services. The application calls them to allocate and free memory on the heap, but the application itself is not doing the underlying allocating/freeing; the whole mechanism is executed at kernel level. See the heap implementation code below:
void *heap_alloc(uint32_t nbytes) {
    heap_header *p, *prev_p; // used to keep track of the current unit
    unsigned int nunits; // this is the number of "allocation units" needed by nbytes of memory

    nunits = (nbytes + sizeof(heap_header) - 1) / sizeof(heap_header) + 1; // see how much we will need to allocate for this call

    // check to see if the list has been created yet; start it if not
    if ((prev_p = _heap_free) == NULL) {
        _heap_base.s.next = _heap_free = prev_p = &_heap_base; // point at the base of the memory
        _heap_base.s.alloc_sz = 0; // and set its allocation size to zero
    }

    // now enter a for loop to find a block of memory
    for (p = prev_p->s.next;; prev_p = p, p = p->s.next) {
        // did we find a big enough block?
        if (p->s.alloc_sz >= nunits) {
            // the block is exact length
            if (p->s.alloc_sz == nunits)
                prev_p->s.next = p->s.next;
            // the block needs to be cut
            else {
                p->s.alloc_sz -= nunits;
                p += p->s.alloc_sz;
                p->s.alloc_sz = nunits;
            }
            _heap_free = prev_p;
            return (void *)(p + 1);
        }
        // not enough space!! Try to get more from the kernel
        if (p == _heap_free) {
            // if the kernel has no more memory, return error!
            if ((p = morecore()) == NULL)
                return NULL;
        }
    }
}
This heap_alloc function uses the morecore function, which is implemented as below:
heap_header *morecore() {
    char *cp;
    heap_header *up;

    cp = (char *)pmmngr_alloc_block(); // allocate more memory for the heap

    // if cp is null we have no memory left
    if (cp == NULL)
        return NULL;

    //vmmngr_mapPhysicalAddress(cp, (void *)_virt_addr); // and map its virtual address to its physical address
    vmmngr_mapPhysicalAddress(vmmngr_get_directory(), _virt_addr, (uint32_t)cp, I86_PTE_PRESENT | I86_PTE_WRITABLE);
    _virt_addr += BLOCK_SIZE; // advance the virtual address by BLOCK_SIZE bytes; this will be our next allocation address

    up = (heap_header *)cp;
    up->s.alloc_sz = BLOCK_SIZE;
    heap_free((void *)(up + 1));
    return _heap_free;
}
As you can see, this function asks the physical memory manager to allocate a block:
cp = (char *)pmmngr_alloc_block();
and then maps the allocated block into virtual memory:
vmmngr_mapPhysicalAddress(vmmngr_get_directory(), _virt_addr, (uint32_t)cp, I86_PTE_PRESENT | I86_PTE_WRITABLE);
As you can see, the whole story is controlled by the heap manager at kernel level.
How do I know the amount of memory used by my program, i.e. its RAM usage?
#include <stdio.h>

int main()
{
    int i = 0;
    for (i = 0; i < 100; i++)
    {
        printf("%d\n", i);
    }
    return 0;
}
I want to write code which calculates the amount of memory used by this program. Maybe like:
#include <stdio.h>

int main()
{
    int i = 0;
    for (i = 0; i < 100; i++)
    {
        printf("%d\n", i);
    }
    printf("Amount of memory consumed=%f", SOME_FUNCTION());
    return 0;
}
The getrusage system call returns a handful of information for the current process, among which is the maximum resident set size:
struct rusage usage;

if (!getrusage(RUSAGE_SELF, &usage)) {
    printf("Maximum resident set size (KB): %ld\n", usage.ru_maxrss);
} else {
    perror("getrusage");
}
This size equates to the peak amount of memory physically wired to the process, not the entire size of its virtual address space, parts of which might be paged out or never loaded.
It's not easy to check how much memory your program uses on a Linux system, but most likely what you want to check is the value of VmRSS in /proc/[pid]/status (or the second column of /proc/[pid]/statm). VmRSS ("resident set size") is the amount of memory your process currently uses.
Other than that, you may be interested in VmSize from /proc/[pid]/status (or the first column of /proc/[pid]/statm). This is the total memory your process uses, including memory swapped out, memory used by shared libraries, and memory-mapped resources (which, in general, don't consume real RAM).
To get the PID of your process, use getpid(). From within your process you can also simply read /proc/self/status, as sketched below.
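A minimal sketch of reading VmRSS from /proc/self/status (Linux-specific; the line format is "VmRSS:   1234 kB"):

#include <stdio.h>
#include <string.h>

long vmrss_kb(void)
{
    FILE *fp = fopen("/proc/self/status", "r");
    char line[256];
    long kb = -1;
    if (fp == NULL)
        return -1;
    while (fgets(line, sizeof line, fp) != NULL) {
        if (strncmp(line, "VmRSS:", 6) == 0) {  /* found the RSS line */
            sscanf(line + 6, "%ld", &kb);       /* value is in kB */
            break;
        }
    }
    fclose(fp);
    return kb;
}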
A simple approach would be to create a wrapper function for memory allocation and freeing, call the wrapper everywhere, and keep the memory-usage bookkeeping inside it. This can only track dynamic memory allocation, e.g.:
#include <stdlib.h>

#define ALLOC 1
#define FREE  2

static int mem_used;   /* running total of dynamically allocated bytes */

void *mem_op(void *pointer, int size, int operation)
{
    switch (operation)
    {
    case ALLOC:
        pointer = malloc(size);   /* call malloc (or calloc) */
        if (pointer != NULL)
            mem_used = mem_used + size;
        break;
    case FREE:
        free(pointer);            /* call free */
        mem_used = mem_used - size;
        break;
    }
    return pointer;
}
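Usage of the wrapper sketched above would then look something like:

int main(void)
{
    void *p = mem_op(NULL, 100, ALLOC);   /* allocates and records 100 bytes */
    /* ... work with p ... */
    mem_op(p, 100, FREE);                 /* frees and subtracts 100 bytes */
    return 0;
}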
Consider a program which uses a large number of roughly page-sized memory regions (say 64 kB or so), each of which is rather short-lived. (In my particular case, these are alternate stacks for green threads.)
How would one best do to allocate these regions, such that their pages can be returned to the kernel once the region isn't in use anymore? The naïve solution would clearly be to simply mmap each of the regions individually, and munmap them again as soon as I'm done with them. I feel this is a bad idea, though, since there are so many of them. I suspect that the VMM may start scaling badly after a while; but even if it doesn't, I'm still interested in the theoretical case.
If I instead just mmap myself a huge anonymous mapping from which I allocate the regions on demand, is there a way to "punch holes" through that mapping for a region that I'm done with? Kind of like madvise(MADV_DONTNEED), but with the difference that the pages should be considered deleted, so that the kernel doesn't actually need to keep their contents anywhere but can just reuse zeroed pages whenever they are faulted again.
I'm using Linux, and in this case I'm not bothered by using Linux-specific calls.
I did a lot of research into this topic (for a different use) at some point. In my case I needed a large hashmap that was very sparsely populated, plus the ability to zero it every now and then.
mmap solution:
The easiest solution to zero a mapping like this (and the portable one; madvise(MADV_DONTNEED) is Linux-specific) is to mmap a new mapping over it.
// Sketch filled in with the essential flags; error handling omitted.
char *mapping = mmap(NULL, length, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
// use the mapping
// zero certain pages by mapping fresh zero-fill pages over them
mmap(mapping + page_aligned_offset, zero_length, PROT_READ | PROT_WRITE,
     MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
The last call is performance-wise equivalent to a subsequent munmap/mmap/MAP_FIXED, but is thread-safe.
Performance-wise, the problem with this solution is that the pages have to be faulted in again on subsequent write accesses, each of which raises a page fault and a context switch. This is only efficient if very few pages were faulted in to begin with.
memset solution:
After getting such crap performance when most of the mapping had to be unmapped, I decided to zero the memory manually with memset. If roughly over 70% of the pages are already faulted in (and if not, they are after the first round of memset), then this is faster than remapping those pages.
mincore solution:
My next idea was to memset only those pages that had actually been faulted in before. This solution is NOT thread-safe. Calling mincore to determine whether a page is faulted in, and then selectively memsetting those pages to zero, was a significant performance improvement until over 50% of the mapping was faulted in, at which point memsetting the entire mapping became simpler (mincore is a system call and requires a context switch).
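A sketch of that mincore approach (not thread-safe, as noted; names are illustrative):

#include <sys/mman.h>
#include <string.h>
#include <unistd.h>

/* Zero only the currently-resident pages. Assumes `mapping` is
 * page-aligned (e.g. straight from mmap) and `length` is a multiple
 * of the page size. */
void zero_resident_pages(char *mapping, size_t length)
{
    size_t ps = (size_t)sysconf(_SC_PAGESIZE);
    size_t npages = length / ps;
    unsigned char vec[npages];           /* one status byte per page */

    if (mincore(mapping, length, vec) != 0)
        return;                          /* fall back: caller could memset everything */
    for (size_t i = 0; i < npages; i++)
        if (vec[i] & 1)                  /* low bit set: page is resident */
            memset(mapping + i * ps, 0, ps);
}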
incore table solution:
My final approach, which I then took, was to keep my own in-core table (one bit per page) that says whether the page has been used since the last wipe. This is by far the most efficient way, since in each round you only zero the pages you actually used. It obviously also is not thread-safe and requires you to track in user space which pages have been written to, but if you need this performance then it is by far the most efficient approach.
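A sketch of that bitmap idea (sizes and names are illustrative; the hard part, calling mark_written() on every write, is up to the application):

#include <string.h>

#define PAGE_SIZE 4096
#define NPAGES    1024

static unsigned char dirty[NPAGES / 8];          /* one bit per page */

void mark_written(size_t page)
{
    dirty[page / 8] |= (unsigned char)(1u << (page % 8));
}

void wipe_dirty_pages(char *mapping)
{
    for (size_t page = 0; page < NPAGES; page++) {
        if (dirty[page / 8] & (1u << (page % 8))) {
            memset(mapping + page * PAGE_SIZE, 0, PAGE_SIZE);
            dirty[page / 8] &= (unsigned char)~(1u << (page % 8));
        }
    }
}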
I don't see why doing lots of calls to mmap/munmap should be that bad. The lookup performance in the kernel for mappings should be O(log n).
As Linux seems to be implemented right now, your only option for punching holes in mappings is mprotect(PROT_NONE), and that still fragments the mappings in the kernel, so it's mostly equivalent to mmap/munmap, except that something else won't be able to steal that VM range from you. What you'd really want is for madvise(MADV_REMOVE) to work, or, as it's called in BSD, madvise(MADV_FREE). That is explicitly designed to do exactly what you want: the cheapest way to reclaim pages without fragmenting the mappings. But at least according to the man page on my two flavors of Linux, it's not fully implemented for all kinds of mappings.
Disclaimer: I'm mostly familiar with the internals of BSD VM systems, but this should be quite similar on Linux.
As in the discussion in comments below, surprisingly enough MADV_DONTNEED seems to do the trick:
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>
#include <unistd.h>
#include <err.h>

int
main(int argc, char **argv)
{
    int ps = getpagesize();
    struct rusage ru = {0};
    char *map;
    int n = 15;
    int i;

    if ((map = mmap(NULL, ps * n, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)) == MAP_FAILED)
        err(1, "mmap");

    for (i = 0; i < n; i++) {
        map[ps * i] = i + 10;
    }

    printf("unnecessary printf to fault stuff in: %d %ld\n", map[0], ru.ru_minflt);

    /* Unnecessary call to madvise to fault in that part of libc. */
    if (madvise(&map[ps], ps, MADV_NORMAL) == -1)
        err(1, "madvise");
    if (getrusage(RUSAGE_SELF, &ru) == -1)
        err(1, "getrusage");
    printf("after MADV_NORMAL, before touching pages: %d %ld\n", map[0], ru.ru_minflt);

    for (i = 0; i < n; i++) {
        map[ps * i] = i + 10;
    }

    if (getrusage(RUSAGE_SELF, &ru) == -1)
        err(1, "getrusage");
    printf("after MADV_NORMAL, after touching pages: %d %ld\n", map[0], ru.ru_minflt);

    if (madvise(map, ps * n, MADV_DONTNEED) == -1)
        err(1, "madvise");
    if (getrusage(RUSAGE_SELF, &ru) == -1)
        err(1, "getrusage");
    printf("after MADV_DONTNEED, before touching pages: %d %ld\n", map[0], ru.ru_minflt);

    for (i = 0; i < n; i++) {
        map[ps * i] = i + 10;
    }

    if (getrusage(RUSAGE_SELF, &ru) == -1)
        err(1, "getrusage");
    printf("after MADV_DONTNEED, after touching pages: %d %ld\n", map[0], ru.ru_minflt);

    return 0;
}
I'm measuring ru_minflt as a proxy to see how many pages we needed to allocate (this isn't exactly true, but the next sentence makes it more plausible). We can see that we get fresh, zeroed pages after the MADV_DONTNEED, because in the "after MADV_DONTNEED, before touching pages" line the contents of map[0] read back as 0 instead of 10.