Does any operating system implement buffering for malloc()? - c

A lot of malloc() calls in a for/while/do loop can consume a lot of time, so I am curious whether any operating system buffers memory to make malloc() fast.
I have been pondering whether I could speed up mallocs by writing a "greedy" wrapper for malloc. E.g., when I ask for 1 MB of memory, the wrapper would allocate 10 MB, and the 2nd, 3rd, 4th, etc. calls would simply be handed memory from that first chunk, which was allocated the "normal" way. Of course, if there is not enough memory left, you would need to allocate a new greedy chunk.
Somehow I think someone must have done this or something similar before. So my question is simply: is this something that will speed up the memory allocation process significantly? (Yes, I could have tried it before asking the question, but I am just too lazy to write such a thing if there is no need to do it.)
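For concreteness, the kind of wrapper I have in mind would be something like this minimal sketch (the 10 MB chunk size and the greedy_malloc name are just for illustration; it ignores freeing and leaks the unused tail of each exhausted chunk):

    #include <stddef.h>
    #include <stdlib.h>

    /* Hypothetical "greedy" wrapper: grab a big chunk up front and hand out
       pieces of it; only call the real malloc() when the chunk runs dry.
       Freeing individual pieces is deliberately not supported in this sketch. */
    #define CHUNK_SIZE (10u * 1024u * 1024u)      /* 10 MB per greedy chunk */

    static char  *chunk;                          /* current big chunk */
    static size_t remaining;                      /* unused bytes left in it */

    void *greedy_malloc(size_t size)
    {
        size = (size + 15u) & ~(size_t)15u;       /* keep returned pointers aligned */

        if (size > CHUNK_SIZE)                    /* huge request: pass it through */
            return malloc(size);

        if (size > remaining) {                   /* chunk exhausted: grab a new one */
            chunk = malloc(CHUNK_SIZE);
            if (chunk == NULL)
                return NULL;
            remaining = CHUNK_SIZE;
        }

        void *p = chunk;
        chunk += size;
        remaining -= size;
        return p;
    }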

All versions of malloc() do buffering of the type you describe to some extent - they will grab a bigger chunk than the current request and use the big chunk to satisfy multiple requests, but only up to some request size. This means that multiple requests for 16 bytes at a time will only require more memory from the o/s once every 50-100 calls, or something along those general lines.
What is less clear is what the boundary size is. It might well be that they allocate some relatively small multiple of 4 KiB at a time. Bigger requests - MiB size requests - will go back to the system for more memory every time that the request cannot be satisfied from what is in the free list. That threshold is typically considerably smaller than 1 MiB, though.
Some versions of malloc() allow you to tune their allocation characteristics, to greater or lesser extents. It has been a fertile area of research - lots of different systems. See Knuth 'The Art of Computer Programming' Vol 1 (Fundamental Algorithms) for one set of discussions.

As I was browsing the Google Chrome code some time ago, I found http://www.canonware.com/jemalloc/.
It is a free, general-purpose and scalable malloc implementation.
Apparently, it is being used in numerous projects, as it usually outperforms the standard malloc implementations in many real-world scenarios (many small allocations instead of a few big ones).
Definitely worth a look!

That technique is called a slab allocator, and most operating systems support it, but I can't find information on whether it is available to a user-space malloc or only for kernel allocations.
You can find the paper by Jeff Bonwick here, which describes the original technique on Solaris.

Google has a greedy implementation of malloc() that does roughly what you're thinking of. It has some drawbacks, but it's very fast in many use cases.

What you're describing has probably been done; I don't really know. However, I doubt that buffering malloc() at the system level would decrease latency much. You've still got to take the time to enter privileged mode for a system call, potentially lock kernel-level structures (which means more system calls and waiting for the locks), and things of that nature.
If you can write your own memory manager in user space for your program and only call malloc() when you need more memory for your pool, you will likely see a decrease in latency.

Related

Looking for a custom memory allocator which allocates from within a large pre-allocated block of memory

I have a memory-heavy application which is supposed to run with low latency and at constant speed, but in practice it has poor performance during the first few seconds of startup. This appears to be because the initial memory accesses trigger page faults, which have significant performance implications.
I would like to try preallocating a single large block of memory, paging it all in (via mlock() or just by touching each byte), and then using a custom malloc()/free() implementation to ensure that all further allocations are done from within this block.
I am aware of numerous custom memory allocators (TCMalloc, Hoard, jemalloc, etc) but it is not clear to me whether they can be backed by user-provided memory, or whether they always perform their internal allocations from the OS. Does anyone have any insight or recommendations here?
To be clear, I am not looking for a memory pooling system (which would be for reusing small objects). The custom implementation of malloc()/free() should be able to perform any size allocation while limiting fragmentation of its backing store and following other best practices.
Edit based on comments: I do not expect to make the system faster - I just want to move the slow part (allocation, initial page faults) to the start of the process, and then do the real computation work once the system is 'primed'.
Thanks!
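For reference, the pre-faulting step I describe above might look something like this minimal sketch (mmap()/mlock()-based; the size and error handling are simplified):

    #define _DEFAULT_SOURCE           /* for MAP_ANONYMOUS on glibc */
    #include <string.h>
    #include <sys/mman.h>

    #define POOL_BYTES (1ull << 30)   /* 1 GiB backing block, purely illustrative */

    /* Reserve one large block up front, lock it into RAM, and touch every page
       so all the page faults happen here, before the latency-sensitive work. */
    static void *reserve_backing_store(void)
    {
        void *pool = mmap(NULL, POOL_BYTES, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (pool == MAP_FAILED)
            return NULL;

        if (mlock(pool, POOL_BYTES) != 0) {
            /* Not fatal for the experiment: the memset below still faults the
               pages in, it just doesn't pin them. */
        }

        memset(pool, 0, POOL_BYTES);  /* touch every page (mlock alone usually
                                         faults them in; this is belt-and-braces) */
        return pool;                  /* hand this to the custom malloc/free layer */
    }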
A bit late to the party.
dlmalloc is one choice that can be backed by pre-allocated memory. You can find it here. You may just need to add some extra definitions at the beginning to force it to use your pre-allocated memory rather than calling the system mmap(); refer to the nice documentation at the beginning of the file.
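As a hedged sketch of what those extra definitions might look like: the macro names (MORECORE, HAVE_MMAP, MORECORE_CANNOT_TRIM) come from dlmalloc's configuration section, but exact names and the expected failure return value vary between versions, so check the comments at the top of malloc.c before relying on this.

    #include <stddef.h>

    #define POOL_SIZE (64u * 1024u * 1024u)   /* your pre-faulted / mlock()'d block */
    static char   pool[POOL_SIZE];
    static size_t pool_used;

    /* sbrk-style function over the pool: dlmalloc asks for "increment" more
       bytes; increment 0 just queries the current break. */
    void *my_morecore(ptrdiff_t increment)
    {
        if (increment < 0 || (size_t)increment > POOL_SIZE - pool_used)
            return (void *)-1;                /* dlmalloc's "failure" value */
        void *p = pool + pool_used;
        pool_used += (size_t)increment;
        return p;
    }

    /* Typical dlmalloc configuration, placed before compiling malloc.c:      */
    /*   #define HAVE_MMAP 0            -- never fall back to mmap()          */
    /*   #define MORECORE my_morecore   -- use the pool above instead of sbrk */
    /*   #define MORECORE_CANNOT_TRIM   -- the pool cannot be shrunk          */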

Profiling resident memory usage and many page faults in C++ program on linux

I am trying to figure out why the resident memory for one version of a program ("new") is much higher (5x) than for another version of the same program ("baseline"). The program runs on a Linux cluster with E5-2698 v3 CPUs and is written in C++. The baseline is a multiprocess program and the new one is a multithreaded program; they are both fundamentally running the same algorithm and computation and operating on the same input data, etc. In both, there are as many processes or threads as cores (64), with threads pinned to CPUs. I've done a fair amount of heap profiling using both Valgrind Massif and Heaptrack, and they show that the memory allocation is the same (as it should be). The RSS for both the baseline and the new version of the program is larger than the LLC.
The machine has 64 cores (hyperthreads). For both versions, I straced relevant processes and found some interesting results. Here's the strace command I used:
strace -k -p <pid> -e trace=mmap,munmap,brk
Here are some details about the two versions:
Baseline Version:
64 processes
RES is around 13 MiB per process
using hugepages (2MB)
no malloc/free-related syscalls were made from the strace call listed above (more on this below)
top output
New Version:
2 processes
32 threads per process
RES is around 2 GiB per process
using hugepages (2MB)
this version does a fair amount of memcpy calls of large buffers (25MB) with default settings of memcpy (which, I think, is supposed to use non-temporal stores but I haven't verified this)
in release and profile builds, many mmap and munmap calls were generated. Curiously, none were generated in debug mode. (more on that below).
top output (same columns as baseline)
Assuming I'm reading this right, the new version has 5x higher RSS in aggregate (entire node) and significantly more page faults as measured using perf stat when compared to the baseline version. When I run perf record/report on the page-faults event, it's showing that all of the page faults are coming from a memset in the program. However, the baseline version has that memset as well and there are no pagefaults due to it (as verified using perf record -e page-faults). One idea is that there's some other memory pressure for some reason that's causing the memset to page-fault.
So, my question is how can I understand where this large increase in resident memory is coming from? Are there performance monitor counters (i.e., perf events) that can help shed light on this? Or, is there a heaptrack- or massif-like tool that will allow me to see what is the actual data making up the RES footprint?
One of the most interesting things I noticed while poking around is the inconsistency of the mmap and munmap calls as mentioned above. The baseline version didn't generate any of those; the profile and release builds (basically, -march=native and -O3) of the new version DID issue those syscalls but the debug build of the new version DID NOT make calls to mmap and munmap (over tens of seconds of stracing). Note that the application is basically mallocing an array, doing compute, and then freeing that array -- all in an outer loop that runs many times.
It might seem that the allocator is able to easily reuse the allocated buffer from the previous outer loop iteration in some cases but not others -- although I don't understand how these things work nor how to influence them. I believe allocators have a notion of a time window after which application memory is returned to the OS. One guess is that in the optimized code (release builds), vectorized instructions are used for the computation and it makes it much faster. That may change the timing of the program such that the memory is returned to the OS; although I don't see why this isn't happening in the baseline. Maybe the threading is influencing this?
(As a shot-in-the-dark comment, I'll also say that I tried the jemalloc allocator, both with default settings and with changed ones, and I got a 30% slowdown with the new version but no change in the baseline when using jemalloc. I was a bit surprised here, as my previous experience with jemalloc was that it tends to produce some speedup with multithreaded programs. I'm adding this comment in case it triggers some other thoughts.)
In general: GCC can optimize malloc+memset into calloc, which leaves pages untouched. If you only actually touch a few pages of a large allocation, that optimization not happening could account for a big difference in page faults.
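To illustrate that point (the function names and size here are just for the example):

    #include <stdlib.h>
    #include <string.h>

    enum { N = 1 << 26 };   /* ~512 MB of doubles, purely illustrative */

    /* Explicit memset touches (and soft-faults) every page right away --
       unless the compiler recognizes the pattern and turns it into calloc,
       which is the optimization mentioned above. */
    double *zeroed_by_memset(void)
    {
        double *p = malloc(N * sizeof *p);
        if (p)
            memset(p, 0, N * sizeof *p);
        return p;
    }

    /* For large sizes, glibc's calloc() typically gets pre-zeroed pages from
       the kernel (via mmap) and leaves them untouched, so page faults happen
       lazily on first access. */
    double *zeroed_by_calloc(void)
    {
        return calloc(N, sizeof(double));
    }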
Or does the change between versions maybe let the system use transparent hugepages differently, in a way that happens to not be good for your workload?
Or maybe just different allocation / free is making your allocator hand pages back to the OS instead of keeping them in a free list. Lazy allocation means you get a soft page fault on the first access to a page after getting it from the kernel. strace to look for mmap / munmap or brk system calls.
In your specific case, your strace testing confirms that your change led to malloc / free handing pages back to the OS instead of keeping them on a free list.
This fully explains the extra page faults. A backtrace of munmap calls could identify the guilty free calls. To fix it, see https://www.gnu.org/software/libc/manual/html_node/Memory-Allocation-Tunables.html / http://man7.org/linux/man-pages/man3/mallopt.3.html, especially M_MMAP_THRESHOLD (perhaps raise it to get glibc malloc not to use mmap for your arrays?). I haven't played with the parameters before. The man page mentions something about a dynamic mmap threshold.
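For example, a minimal sketch of tuning those knobs at startup (the specific values are guesses, not recommendations):

    #include <malloc.h>   /* glibc-specific: mallopt(), M_MMAP_THRESHOLD, M_TRIM_THRESHOLD */

    int main(void)
    {
        /* Keep allocations up to 64 MiB on the main heap instead of using mmap(),
           so free() returns them to malloc's free list rather than munmap()ing.
           Note: setting this explicitly disables the dynamic threshold heuristic. */
        mallopt(M_MMAP_THRESHOLD, 64 * 1024 * 1024);

        /* Only trim the heap (give memory back to the kernel) once more than
           64 MiB sits free at the top, so repeated alloc/free cycles reuse pages. */
        mallopt(M_TRIM_THRESHOLD, 64 * 1024 * 1024);

        /* ... rest of the program: the big array alloc/compute/free loop ... */
        return 0;
    }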
It doesn't explain the extra RSS; are you sure you aren't accidentally allocating 5x the space? If you aren't, perhaps better alignment of the allocation lets the kernel use transparent hugepages where it didn't before, maybe leading to wasting up to 1.99 MiB at the end of an array instead of just under 4k? Or maybe Linux wouldn't use a hugepage if you only allocated the first couple of 4k pages past a 2M boundary.
If you're getting the page faults in memset, I assume these arrays aren't sparse and that you are touching every element.
I believe allocators have a notion of a time window after which application memory is returned to the OS
It would be possible for an allocator to check the current time every time you call free, but that's expensive so it's unlikely. It's also very unlikely that they use a signal handler or separate thread to do a periodic check of free-list size.
I think glibc just uses a size-based heuristic that it evaluates on every free. As I said, the man page mentions something about heuristics.
IMO actually tuning malloc (or finding a different malloc implementation) that's better for your situation should probably be a different question.

Use of malloc in Real Time application

We had an issue with one of our real-time applications. The idea was to run one of the threads every 2 ms (500 Hz). After the application ran for half an hour or so, we noticed that the thread was falling behind.
After a few discussions, people complained that the malloc allocations in the real-time thread were the root cause.
I am wondering: is it always a good idea to avoid all dynamic memory allocation in real-time threads?
The Internet has very few resources on this. If you can point to some discussion, that would be great too.
Thanks
First step is to profile the code and make sure you understand exactly where the bottleneck is. People are often bad at guessing bottlenecks in code, and you might be surprised with the findings. You can simply instrument several parts of this routine yourself and dump min/avg/max durations in regular intervals. You want to see the worst case (max), and if the average duration increases as the time goes by.
I doubt that malloc will take any significant portion of these 2ms on a reasonable microcontroller capable of running Linux; I'd say it's more likely you would run out of memory due to fragmentation, than having performance issues. If you have any other syscalls in your function, they will easily take an order of magnitude more than malloc.
But if malloc is really the problem, depending on how short-lived your objects are, how much memory you can afford to waste, and how much your requirements are known in advance, there are several approaches you can take:
General purpose allocation (malloc from your standard library, or any third party implementation): best approach if you have "more than enough" RAM, many short-lived objects, and no strict latency requirements
PROS: works for any object size out of the box, familiar interface, memory is shared dynamically, no need to "plan ahead" if memory is not an issue
CONS: slight performance penalty during allocation and/or deallocation, memory fragmentation issues when doing lots of allocations/deallocations of objects of different sizes, whether a run-time allocation will fail is less deterministic and cannot be easily mitigated at runtime
Memory pool: best approach in most cases where memory is limited, low latency is required, and the object needs to live longer than a single block scope (a minimal sketch follows at the end of this answer)
PROS: allocation/deallocation time is guaranteed to be O(1) in any reasonable implementation, does not suffer from fragmentation, easier to plan its size in advance, failure to allocate at run-time is (likely) easier to mitigate
CONS: works for a single specific object size - memory is not shared with other parts of the program, requires planning the size of the pool in advance or risking potential waste of memory
Stack based (automatic-duration) objects: best for smaller, short-lived objects (single block scope)
PROS: allocation and deallocation is done automatically, allows having optimum usage of RAM for the object's lifetime, there are tools which can sometimes do a static analysis of your code and estimate the stack size
CONS: objects limited to a single block scope - cannot share objects between interrupt invocations
Individual statically allocated objects: best approach for long lived objects
PROS: no allocation whatsoever - all needed objects exist throughout the application life-cycle, no problems with allocation/deallocation
CONS: wastes memory if the objects should be short-lived
Even if you decide to go for memory pools all over the program, make sure you add profiling/instrumentation to your code. And then leave it there forever to see how the performance changes over time.
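As a minimal sketch of the memory pool option mentioned above (fixed-size blocks with an intrusive free list; the names, sizes, and lack of locking are simplifications):

    #include <stddef.h>

    #define BLOCK_SIZE   256
    #define BLOCK_COUNT  1024

    typedef union block {
        union block *next;                 /* valid only while the block is free */
        char         data[BLOCK_SIZE];
    } block_t;

    static block_t  pool[BLOCK_COUNT];     /* statically reserved backing storage */
    static block_t *free_list;

    void pool_init(void)                   /* call once, outside the real-time path */
    {
        for (size_t i = 0; i < BLOCK_COUNT - 1; ++i)
            pool[i].next = &pool[i + 1];
        pool[BLOCK_COUNT - 1].next = NULL;
        free_list = &pool[0];
    }

    void *pool_alloc(void)                 /* O(1), never calls into the OS */
    {
        block_t *b = free_list;
        if (b == NULL)
            return NULL;                   /* pool exhausted: caller must handle it */
        free_list = b->next;
        return b;
    }

    void pool_free(void *p)                /* O(1) */
    {
        block_t *b = p;
        b->next = free_list;
        free_list = b;
    }

Note that this sketch is not thread-safe; for a 500 Hz real-time thread you would typically initialize the pool during startup and either keep it private to that thread or protect it with a lock-free scheme.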
Being a realtime software engineer in the aerospace industry, we see this question a lot. Even among our own engineers, we see software engineers attempt to use non-realtime programming techniques they learned elsewhere, or to use open-source code in their programs. Never allocate from the heap during realtime. One of our engineers created a tool that intercepts malloc and records the overhead. You can see in the numbers that you cannot predict when the allocation attempt will take a long time. Even on very high-end computers (72-core, 256 GB RAM servers) running a realtime hybrid of Linux, we record mallocs taking hundreds of milliseconds. It can involve a system call, which is cross-ring and therefore high overhead, and you don't know when you will get hit by garbage collection, or when the allocator decides it must request another large chunk of memory for the task from the OS.

overhead for an empty heap arena

My tools are Linux, gcc and pthreads. When my program calls new/delete from several threads, and when there is contention for the heap, 'arena's are created (see the following link for reference http://www.bozemanpass.com/info/linux/malloc/Linux_Heap_Contention.html). My program runs 24x7, and arenas are still occasionally being created after 2 weeks. I think there may eventually be as many arenas as threads. ps(1) shows alarming memory consumption, but I suspect that only a small portion of it is actually mapped.
What is the 'overhead' for an empty arena? (How much more memory per arena is used than if all allocation was confined to the traditional heap? )
Is there any way to force the creation in advance of n arenas? Is there any way to force the destruction of empty arenas?
struct malloc_state (aka mstate, aka the arena descriptor) has the following size:
glibc-2.2: (256+18)*4 bytes =~ 1 KB in 32-bit mode and ~2 KB in 64-bit mode
glibc-2.3: (256+256/32+11+NFASTBINS)*4 =~ 1.1-1.2 KB in 32-bit mode and 2.4-2.5 KB in 64-bit mode
See struct malloc_state in the file glibc-x.x.x/malloc/malloc.c.
Destruction of arenas... I don't know yet, but there is this text (briefly: it says NO to the possibility of destroying/trimming arena memory) from the analysis http://www.citi.umich.edu/techreports/reports/citi-tr-00-5.pdf from 2000 (a bit outdated). Please name your glibc version.
Ptmalloc maintains a linked list of subheaps. To reduce lock contention, ptmalloc searches for the first unlocked subheap and grabs memory from it to fulfill a malloc() request. If ptmalloc doesn't find an unlocked heap, it creates a new one. This is a simple way to grow the number of subheaps as appropriate without adding complicated schemes for hashing on thread or processor ID, or maintaining workload statistics. However, there is no facility to shrink the subheap list and nothing stops the heap list from growing without bound.
from malloc.c (glibc 2.3.5) line 1546
/*
-------------------- Internal data structures --------------------
All internal state is held in an instance of malloc_state defined
below.
...
Beware of lots of tricks that minimize the total bookkeeping space
requirements. **The result is a little over 1K bytes** (for 4byte
pointers and size_t.)
*/
That matches the result I got for 32-bit mode: a little over 1 KB.
Consider using TCMalloc from google-perftools. It is just better suited for threaded and long-living applications. And it is very fast.
Take a look at http://goog-perftools.sourceforge.net/doc/tcmalloc.html, especially the graphs (higher is better). TCMalloc is roughly twice as fast as ptmalloc.
In our application the main cost of multiple arenas has been "dark" memory: memory allocated from the OS that we don't have any references to.
The pattern you can see is:
Thread X goes to alloc, hits a collision, creates a new arena.
Thread X makes some large allocations.
Thread X makes some small allocation(s).
Thread X stops allocating.
Large allocations are freed. But the whole arena at the high water mark of the last currently active allocation is still using up VMEM, and other threads won't use this arena unless they hit contention in the main arena.
Basically it's a contributor to "memory fragmentation", since there are multiple places memory can be available, but needing to grow an arena is not a reason to look in other arenas. At least I think that's the cause; the point is that your application can end up with a bigger VM footprint than you think it should have. This mostly hits you if you have limited swap, since, as you say, most of this ends up paged out.
Our (memory-hungry) application can have tens of percent of its memory "wasted" in this way, and it can really bite in some situations.
I'm not sure why you would want to create empty arenas. If allocations and frees happen in the same thread as each other, then I think over time you will tend toward all of them being in the same thread-specific arena with no contention. You may have some small blips while you get there, so maybe that's a reason.

how to fix a memory size for my application in C?

I would like to allocate a fixed amount of memory for my application (developed in C). Say my application should not exceed 64 MB of memory usage. I should also avoid excessive CPU usage. How is this possible?
Regards
Marcel.
Under Unix: "ulimit -d 65536" (the limit is given in kilobytes, so 65536 KB = 64 MB).
One fairly low-tech way of ensuring you don't cross a maximum memory threshold in your application would be to define your own special malloc() function which keeps count of how much memory has been allocated, and returns a NULL pointer if the threshold has been exceeded. This would of course rely on you checking the return value of malloc() every time you call it, which is generally considered good practice anyway, because there is no guarantee that malloc() will find a contiguous block of memory of the size that you requested.
This wouldn't be foolproof though, because it probably won't take into account memory padding for word alignment, so you'd probably end up reaching the 64MB memory limit long before your function reports that you have reached it.
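A rough sketch of that kind of counting wrapper (the limit, names, and header layout are my own; a real version would also need to wrap realloc() and be made thread-safe):

    #include <stddef.h>
    #include <stdlib.h>

    #define MEMORY_LIMIT (64u * 1024u * 1024u)   /* 64 MB budget */

    /* Header stored in front of each block; the union keeps the user pointer
       suitably aligned for any type. */
    typedef union {
        size_t      size;
        max_align_t align;
    } header_t;

    static size_t used_bytes;                    /* running total of requested bytes */

    /* Counts only the requested sizes, not malloc's own padding/overhead, so
       real usage will hit the 64 MB mark somewhat before used_bytes does. */
    void *limited_malloc(size_t size)
    {
        if (size > MEMORY_LIMIT - used_bytes)
            return NULL;                         /* over budget: refuse the request */

        header_t *h = malloc(sizeof *h + size);
        if (h == NULL)
            return NULL;                         /* the real malloc failed */

        h->size = size;
        used_bytes += size;
        return h + 1;                            /* memory just past the header */
    }

    void limited_free(void *p)
    {
        if (p == NULL)
            return;
        header_t *h = (header_t *)p - 1;
        used_bytes -= h->size;
        free(h);
    }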
Also, assuming you are using Win32, there are probably APIs that you could use to get the current process size and check this within your custom malloc() function. Keep in mind that adding this checking overhead to your code will most likely cause it to use more CPU and run a lot slower than normal, which leads nicely into your next question :)
I should also avoid excessive CPU usage.
This is a very general question and there is no easy answer. You could write two different programs which essentially do the same thing, and one could be 100 times more CPU intensive than another one due to the algorithms that have been used. The best technique is to:
Set some performance benchmarks.
Write your program.
Measure to see whether it reaches your benchmarks.
If it doesn't reach your benchmarks, optimise and go to step (3).
You can use profiling programs to help you work out where your algorithms need to be optimised. Rational Quantify is an example of a commercial one, but there are many free profilers out there too.
If you are on a POSIX, System V- or BSD-derived system, you can use setrlimit() with the RLIMIT_DATA resource - similar to ulimit -d.
Also take a look at the RLIMIT_CPU resource - it's probably what you need (similar to ulimit -t).
Check man setrlimit for details.
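A minimal sketch of putting both limits in place at startup (the values are illustrative; note that on older Linux kernels RLIMIT_DATA only covers brk()-based allocations, not mmap()):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Cap the data segment at 64 MB: allocations that would push the
           process past this fail (malloc returns NULL). */
        struct rlimit mem = { .rlim_cur = 64 * 1024 * 1024,
                              .rlim_max = 64 * 1024 * 1024 };
        /* Cap CPU time at 60 seconds: exceeding it delivers SIGXCPU. */
        struct rlimit cpu = { .rlim_cur = 60, .rlim_max = 60 };

        if (setrlimit(RLIMIT_DATA, &mem) != 0)
            perror("setrlimit(RLIMIT_DATA)");
        if (setrlimit(RLIMIT_CPU, &cpu) != 0)
            perror("setrlimit(RLIMIT_CPU)");

        /* ... rest of the application ... */
        return 0;
    }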
For CPU, we've had a very low-priority task (lower than everything else) that does nothing but count. Then you can see how often that task gets to run, and you know if the rest of your processes are consuming too much CPU. This approach doesn't work if you want to limit your process to 10% while other processes are running, but if you want to ensure that you have 50% CPU free then it works fine.
For memory limitations you are either stuck implementing your own layer on top of malloc, or taking advantage of your OS in some way. On Unix systems ulimit is your friend. On VxWorks I bet you could probably figure out a way to take advantage of the task control block to see how much memory the application is using... if there isn't already a function for that. On Windows you could probably at least set up a monitor to report if your application does go over 64 MB.
The other question is: what do you do in response? Should your application crash if it exceeds 64 MB? Do you want this just as a guide to help you limit yourself? That might make the difference between choosing an "enforcing" approach versus a "monitor and report" approach.
Hmm; good question. I can see how you could do this for memory allocated off the heap, using a custom version of malloc and free, but I don't know about enforcing it on the stack too.
Managing the CPU is harder still...
Interesting.
