I'm trying to optimize my dynamic memory usage. I initially allocate some amount of memory for the data I get from a socket, and on each new data arrival I reallocate the buffer so the newly arrived part fits into it. After some poking around I've found that malloc actually allocates a larger block than requested, in some cases significantly larger; here is some debug output from malloc_usable_size(ptr):
requested 284 bytes, allocated 320 bytes
requested 644 bytes, reallocated 1024 bytes
It's well known that malloc/realloc are expensive operations. In most cases the newly arrived data would fit into the previously allocated block (at least when I requested 644 bytes and got 1024 instead), but I have no idea how I can figure that out.
The trouble is that malloc_usable_size should not be relied upon (as described in the man page), and if the program requested 644 bytes and malloc allocated 1024, the excess bytes beyond the requested 644 may be overwritten and cannot be used safely. So, calling malloc for a given amount of data and then using malloc_usable_size to figure out how many bytes were really allocated isn't the way to go.
What I want is to know the allocator's block grid before calling malloc, so I can request exactly the block size malloc would round up to anyway, store that size, and on the next arrival check whether I really need to realloc or whether the previously allocated block is already large enough.
In other words, if I were to request 644 bytes, and malloc actually gave me 1024, I want to have predicted that and requested 1024 instead.
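To make the bookkeeping I'm describing concrete, here is a minimal sketch: I keep the capacity I requested myself and only realloc when incoming data no longer fits (the doubling is just a placeholder; knowing the allocator's real grid is exactly what I'm asking about):
#include <stdlib.h>
#include <string.h>

struct buf {
    char   *data;
    size_t  len;   /* bytes currently used        */
    size_t  cap;   /* bytes we actually requested */
};

/* Append n bytes, calling realloc only when the stored capacity is
 * exceeded; growth is geometric (x2), which is a guess, not malloc's grid. */
static int buf_append(struct buf *b, const void *src, size_t n)
{
    if (b->len + n > b->cap) {
        size_t newcap = b->cap ? b->cap : 64;
        while (newcap < b->len + n)
            newcap *= 2;
        char *p = realloc(b->data, newcap);
        if (p == NULL)
            return -1;
        b->data = p;
        b->cap  = newcap;
    }
    memcpy(b->data + b->len, src, n);
    b->len += n;
    return 0;
}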
Depending on your particular libc implementation you will get different behaviour. In most cases I have found two approaches that do the trick:
Use the stack. This is not always feasible, but C allows VLAs on the stack, and this is the most effective approach if you don't intend to pass your buffer to another thread:
while (1) {
    char buffer[known_buffer_size];
    read(fd, buffer, known_buffer_size);
    // use buffer
    // released at the end of scope
}
On Linux you can make excellent use of mremap, which can enlarge/shrink memory with zero copy guaranteed. It may move your VM mapping, though. The only problem is that it works only in chunks of the system page size, sysconf(_SC_PAGESIZE), which is usually 0x1000.
#define _GNU_SOURCE        /* for mremap() */
#include <sys/mman.h>

void *buffer = mmap(NULL, init_size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
while (1) {
    // if it needs remapping
    {
        // zero copy, but involves a system call
        buffer = mremap(buffer, current_size, new_size, MREMAP_MAYMOVE);
    }
    // use buffer
}
munmap(buffer, current_size);
OS X has similar semantics to Linux's mremap through the Mach vm_remap; it's a little more complicated, though.
#include <sys/mman.h>
#include <mach/mach.h>
#include <mach/vm_map.h>

void *buffer = mmap(NULL, init_size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
mach_port_t this_task = mach_task_self();
while (1) {
    // if it needs remapping
    {
        // zero copy, but involves a system call
        void *new_address = mmap(NULL, new_size, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        vm_prot_t cur_prot, max_prot;
        munmap(new_address, new_size);             // vm range needs to be empty for remap
        // there is a race condition between these two calls
        vm_remap(this_task,
                 (vm_address_t *)&new_address,     // new address
                 current_size,                     // has to be page-aligned
                 0,                                // auto alignment
                 0,                                // remap fixed (not "anywhere")
                 this_task,                        // same task
                 (vm_address_t)buffer,             // source address
                 0,                                // FALSE: share the pages, do not copy
                 &cur_prot,                        // unused protection struct
                 &max_prot,                        // unused protection struct
                 VM_INHERIT_DEFAULT);
        munmap(buffer, current_size);              // remove old mapping
        buffer = new_address;
    }
    // use buffer
}
The short answer is that the standard malloc interface does not provide the information you are looking for. Using that information would break the abstraction it provides.
Some alternatives are:
Rethink your usage model. Perhaps pre-allocate a pool of buffers at start and fill them as you go (a sketch follows this list). Unfortunately this could complicate your program more than you would like.
Use a different memory allocation library that does provide the needed interface. Different libraries provide different tradeoffs in terms of fragmentation, max run time, average run time, etc.
Use your OS memory allocation API. These are often written to be efficient, but will generally require a system call (unlike a user-space library).
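As a rough illustration of the pre-allocated pool mentioned in the first alternative (the pool size and buffer size here are invented for the sketch):
#include <stdlib.h>

#define POOL_BUFS  16
#define BUF_SIZE   2048   /* assumed worst-case message size */

static char pool[POOL_BUFS][BUF_SIZE];
static int  in_use[POOL_BUFS];

/* Hand out a fixed-size buffer from the pool; no malloc at runtime. */
static void *pool_get(void)
{
    for (int i = 0; i < POOL_BUFS; i++) {
        if (!in_use[i]) {
            in_use[i] = 1;
            return pool[i];
        }
    }
    return NULL;              /* pool exhausted */
}

static void pool_put(void *p)
{
    for (int i = 0; i < POOL_BUFS; i++) {
        if (p == pool[i]) {
            in_use[i] = 0;
            return;
        }
    }
}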
In my professional code, I often take advantage of the actual size allocated by malloc() [etc.], rather than the requested size. This is my function for determining the actual allocation size:
#include <errno.h>
#if defined(__APPLE__)
#include <malloc/malloc.h>   /* malloc_size() */
#elif defined(__linux__)
#include <malloc.h>          /* malloc_usable_size() */
#endif

int MM_MEM_Stat(
    void   *I__ptr_A,
    size_t *_O_allocationSize
    )
{
    int    rCode = GAPI_SUCCESS;
    size_t size  = 0;

    /*-----------------------------------------------------------------
    ** Validate caller arg(s).
    */
#ifdef __linux__   // Not required for __APPLE__, as malloc_size() will
                   // return 0 for non-malloc'ed refs.
    if(NULL == I__ptr_A)
    {
        rCode=EINVAL;
        goto CLEANUP;
    }
#endif

    /*-----------------------------------------------------------------
    ** Calculate the size.
    */
#if defined(__APPLE__)
    size=malloc_size(I__ptr_A);
#elif defined(__linux__)
    size=malloc_usable_size(I__ptr_A);
#else
    !##$%   /* deliberate syntax error: unsupported platform */
#endif

    if(0 == size)
    {
        rCode=EFAULT;
        goto CLEANUP;
    }

    /*-----------------------------------------------------------------
    ** Return requested values to caller.
    */
    if(_O_allocationSize)
        *_O_allocationSize = size;

CLEANUP:

    return(rCode);
}
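A hypothetical call site, assuming GAPI_SUCCESS is defined as 0 elsewhere in the codebase:
#include <stdio.h>
#include <stdlib.h>

int MM_MEM_Stat(void *I__ptr_A, size_t *_O_allocationSize);   /* from above */

int main(void)
{
    size_t actual = 0;
    void  *p = malloc(644);

    if (p != NULL && MM_MEM_Stat(p, &actual) == 0)   /* 0 == GAPI_SUCCESS assumed */
        printf("requested 644, actually usable: %zu bytes\n", actual);

    free(p);
    return 0;
}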
I did some more research and found two interesting things about the malloc implementations in Linux and FreeBSD:
1) on Linux, malloc increments blocks linearly in 16-byte steps, at least up to 8K, so no optimization is needed at all; it simply isn't worthwhile;
2) on FreeBSD the situation is different: the steps are bigger and tend to grow with the requested block size.
So, any kind of optimization is needed only on FreeBSD, since Linux allocates blocks in very small steps and it's very unlikely to receive less than 16 bytes of data from a socket.
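A sketch of how such a survey can be reproduced; malloc_usable_size (glibc-specific) is used here purely for inspection, never to claim the extra bytes:
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>   /* malloc_usable_size(), glibc */

int main(void)
{
    /* Print requested vs. usable size to observe the allocator's step grid. */
    for (size_t req = 16; req <= 8192; req += 16) {
        void *p = malloc(req);
        if (p == NULL)
            break;
        printf("requested %5zu usable %5zu\n", req, malloc_usable_size(p));
        free(p);
    }
    return 0;
}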
The mallinfo structure has two fields, hblks and hblkhd. The man page says they hold the number of blocks allocated by mmap and the total number of bytes allocated that way. But when I run the following code,
void * ptr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
*(int *) ptr = 10;
the fields hblks and hblkhd are still zero, while the total number of free bytes in the blocks decreases. Could you please explain why this behavior is observed?
I also tried allocating all the free space first and calling mmap after that, but in that situation the fields were still zero.
Compiler: gcc 9.4.0
OS: Ubuntu 20.04.1
I did some experiments, and they led me to the conclusion that these fields are filled only when the mmap happens inside a malloc call. A direct mmap call doesn't show up in this statistic, which is logical: mmap is a system call, while the statistic is collected in user space.
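A minimal sketch of that experiment: a raw mmap leaves the statistic untouched, while a malloc large enough to exceed glibc's mmap threshold (128 KiB by default) is counted, because the mmap happens inside malloc:
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>     /* mallinfo(), glibc */
#include <sys/mman.h>

static void show(const char *tag)
{
    struct mallinfo mi = mallinfo();
    printf("%-12s hblks=%d hblkhd=%d\n", tag, mi.hblks, mi.hblkhd);
}

int main(void)
{
    show("start");

    /* Direct mmap: invisible to malloc's bookkeeping. */
    void *m = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    *(int *)m = 10;
    show("raw mmap");

    /* Large malloc: glibc satisfies it via mmap and records it. */
    void *p = malloc(1024 * 1024);
    show("big malloc");

    free(p);
    munmap(m, 4096);
    return 0;
}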
I have a PCIe endpoint (EP) device connected to the host. The EP's 512 MB BAR is mmapped and memcpy is used to transfer data. The memcpy is quite slow (~2.5 s). When I map only part of the BAR (100 bytes) but still run memcpy for the full 512 MB, I get a segfault within 0.5 s; however, when reading back the end of the BAR, it shows the correct data, i.e. the same as if I had mmapped the whole BAR space.
How is the data being written and why is it so much faster than doing it the correct way (without the segfault)?
Code to map the whole BAR (takes 2.5s):
fd = open(filename, O_RDWR | O_SYNC);
map_base = mmap(NULL, 536870912, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
int rand_fd = open(infile, O_RDONLY);
rand_base = mmap(0, 536870912, PROT_READ, MAP_SHARED, rand_fd, 0);
memcpy(map_base, rand_base, 536870912);
if (munmap(map_base, map_size) == -1)
{
    PRINT_ERROR;
}
close(fd);
Code to map only 100 bytes (takes 0.5s):
fd = open(filename, O_RDWR | O_SYNC);
map_base = mmap(NULL, 100, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
int rand_fd = open(infile, O_RDONLY);
rand_base = mmap(0, 536870912, PROT_READ, MAP_SHARED, rand_fd, 0);
memcpy(map_base, rand_base, 536870912);
if (munmap(map_base, map_size) == -1)
{
    PRINT_ERROR;
}
close(fd);
To check the written data, I am using pcimem
https://github.com/billfarrow/pcimem
Edit: I was being dumb; while consistent data was being 'written' after the segfault, it was not the data it should have been. So my conclusion that memcpy was completing after the segfault was false. I am accepting the answer, as it provided me with useful information.
Assuming filename is just an ordinary file (to save the data), leave off O_SYNC. It will just slow things down [possibly, a lot].
When opening the BAR device, consider using O_DIRECT. This may minimize caching effects. That is, if the BAR device does its own caching, eliminate caching by the kernel, if possible.
How is the data being written and why is it so much faster than doing it the correct way (without the segfault)?
The "short" mmap/read is not working. The extra data comes from the prior "full" mapping. So, your test isn't valid.
To ensure consistent results, do unlink on the output file. Do open with O_CREAT. Then, use ftruncate to extend the file to the full size.
Here is some code to try:
#define SIZE (512 * 1024 * 1024)

// remove the output file
unlink(filename);

// open output file (create it)
int ofile_fd = open(filename, O_RDWR | O_CREAT, 0644);

// prevent segfault by providing space in the file
ftruncate(ofile_fd, SIZE);

map_base = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, ofile_fd, 0);

// use O_DIRECT to minimize caching effects when accessing the BAR device
#if 0
int rand_fd = open(infile, O_RDONLY);
#else
int rand_fd = open(infile, O_RDONLY | O_DIRECT);
#endif

rand_base = mmap(0, SIZE, PROT_READ, MAP_SHARED, rand_fd, 0);

memcpy(map_base, rand_base, SIZE);

if (munmap(map_base, SIZE) == -1) {
    PRINT_ERROR;
}

// close the output file
close(ofile_fd);
Depending upon the characteristics of the BAR device, to minimize the number of PCIe read/fetch/transaction requests, it may be helpful to ensure that it is being accessed as 32 bit (or 64 bit) elements.
Does the BAR space allow/support/encourage access as "ordinary" memory?
Usually, memcpy is smart enough to switch to "wide" memory access automatically (if memory addresses are aligned, which they are here). That is, memcpy will automatically use 64-bit fetches, with movq or possibly by using some XMM instructions, such as movdqa.
It would help to know exactly which BAR device(s) you have. The datasheet/appnote should give enough information.
UPDATE:
Thanks for the sample code. Unfortunately, aarch64-gcc gives 'O_DIRECT undeclared' for some reason. Without using that flag, the speed is the same as my original code.
Add #define _GNU_SOURCE above any #include to resolve O_DIRECT
The PCIe device is an FPGA that we are developing. The bitstream is currently the Xilinx DMA example code. The BAR is just 512MB of memory for the system to R/W to.
Serendipitously, my answer was based on my experience with access to the BAR space of a Xilinx FPGA device (it's been a while, circa 2010).
When we were diagnosing speed issues, we used a PCIe bus analyzer. This can show the byte width of the bus requests the CPU has requested. It also shows the turnaround time (e.g. Bus read request time until data packet from device is returned).
We also had to adjust the parameters in the PCIe config registers (e.g. transfer size, transaction replay) for the device/BAR. This was trial-and-error, and we (I) tried some 27 different combinations before deciding on the optimum config.
On an unrelated arm system (e.g. nVidia Jetson) about 3 years ago, I had to do memcpy to/from the GPU memory. It may have just been the particular cross-compiler I was using, but the disassembly of memcpy showed that it only used bytewide transfers. That is, it wasn't as smart as its x86 counterpart. I wrote/rewrote a version that used unsigned long long [and/or unsigned __int128] transfers. This sped things up considerably. See below.
So, you may wish to disassemble the generated memcpy code. Either the library function and/or code that it may inline into your function.
Just a thought ... If you're just wanting a bulk transfer, you may wish to have the device driver for the device program the DMA engine on the FPGA. This might be handled more effectively with a custom ioctl call to the device driver that accepts a custom struct describing the desired transfer (vs. read or mmap from userspace).
Are you writing a custom device driver for the device? Or, are you just using some generic device driver?
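As a very rough sketch of that ioctl idea (the struct layout, ioctl number, and device interface here are entirely hypothetical, not from any real driver):
#include <stdint.h>
#include <sys/ioctl.h>

/* Hypothetical transfer descriptor that a custom driver would interpret. */
struct fpga_dma_xfer {
    uint64_t host_addr;   /* user buffer; the driver would pin/map it */
    uint64_t bar_offset;  /* destination offset within the BAR        */
    uint64_t length;      /* bytes to move                            */
};

#define FPGA_IOC_DMA  _IOW('F', 1, struct fpga_dma_xfer)   /* made-up ioctl number */

int start_dma(int dev_fd, void *buf, uint64_t off, uint64_t len)
{
    struct fpga_dma_xfer x = {
        .host_addr  = (uint64_t)(uintptr_t)buf,
        .bar_offset = off,
        .length     = len,
    };
    return ioctl(dev_fd, FPGA_IOC_DMA, &x);   /* driver programs the FPGA DMA engine */
}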
Here's what I had to do to get a fast memcpy on arm. It generates ldp/stp asm instructions.
// qcpy.c -- fast memcpy
#include <string.h>
#include <stddef.h>

#ifndef OPT_QMEMCPY
#define OPT_QMEMCPY 128
#endif

#ifndef OPT_QCPYIDX
#define OPT_QCPYIDX 1
#endif

// atomic type for qmemcpy
#if OPT_QMEMCPY == 32
typedef unsigned int qmemcpy_t;
#elif OPT_QMEMCPY == 64
typedef unsigned long long qmemcpy_t;
#elif OPT_QMEMCPY == 128
typedef unsigned __int128 qmemcpy_t;
#else
#error qmemcpy.c: unknown/unsupported OPT_QMEMCPY
#endif

typedef qmemcpy_t *qmemcpy_p;
typedef const qmemcpy_t *qmemcpy_pc;

// _qmemcpy -- fast memcpy
// RETURNS: number of bytes transferred
size_t
_qmemcpy(qmemcpy_p dst, qmemcpy_pc src, size_t size)
{
    size_t cnt;
    size_t idx;

    cnt = size / sizeof(qmemcpy_t);
    size = cnt * sizeof(qmemcpy_t);

    if (OPT_QCPYIDX) {
        for (idx = 0; idx < cnt; ++idx)
            dst[idx] = src[idx];
    }
    else {
        for (; cnt > 0; --cnt, ++dst, ++src)
            *dst = *src;
    }

    return size;
}

// qmemcpy -- fast memcpy
void
qmemcpy(void *dst, const void *src, size_t size)
{
    size_t xlen;

    // use fast memcpy for aligned size
    if (OPT_QMEMCPY > 0) {
        xlen = _qmemcpy(dst, src, size);
        src += xlen;
        dst += xlen;
        size -= xlen;
    }

    // copy remainder with ordinary memcpy
    if (size > 0)
        memcpy(dst, src, size);
}
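A hypothetical call site for the routine above, copying an ordinary 512 MiB buffer (in the question's setup, dst would be the mmapped BAR):
#include <stdlib.h>
#include <string.h>

void qmemcpy(void *dst, const void *src, size_t size);   /* from qcpy.c above */

int main(void)
{
    size_t n = 512 * 1024 * 1024;   /* 512 MiB, as in the question */
    void *src = malloc(n);
    void *dst = malloc(n);

    if (src != NULL && dst != NULL) {
        memset(src, 0xA5, n);
        qmemcpy(dst, src, n);       /* wide-transfer copy instead of plain memcpy */
    }

    free(src);
    free(dst);
    return 0;
}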
UPDATE #2:
Speaking of serendipity, I am using a Jetson Orin. That is very interesting about the byte-wise behavior.
Just a thought ... If you have a Jetson in the same system as the FPGA, you might get DMA action by judicious use of CUDA.
Due to requirements, I cannot use any custom kernel modules so I am trying to do it all in userspace.
That is a harsh mistress to serve ... With custom H/W, it is almost axiomatic that you can have a custom device driver. So, the requirement sounds like a marketing/executive one rather than a technical one. If it's something like not being able to ship a .ko file because you don't know the target kernel version, it is possible to ship the driver as a .o and defer the .ko creation to the install script.
We want to use the DMA engine, but I am hiking up the learning curve on this one. We are using DMA in the FPGA, but I thought that as long as we could write to the address specified in the dtb, that meant the DMA engine was set up and working. Now I'm wondering if I have completely misunderstood that part.
You probably will not get DMA doing that. If you start the memcpy, how does the DMA engine know the transfer length?
You might have better luck using read/write vs mmap to get DMA going, depending upon the driver.
But, if it were me, I'd keep the custom driver option open:
If you have to tweak/modify the BAR config registers on driver/system startup, I can't recall if it's even possible to map the config registers to userspace.
When doing mmap, the device may be treated as the "backing store" for the mapping. That is, there is still an extra layer of kernel buffering [just like there is when mapping an ordinary file]. The device memory is only updated periodically from the kernel [buffer] memory.
A custom driver can set up a [guaranteed] direct mapping, using some trickery that only the kernel/driver has access to.
Historical note:
When I last worked with the Xilinx FPGA (12 years ago), the firmware loader utility (provided by Xilinx in both binary and source form) would read in bytes from the firmware/microcode .xsvf file using (e.g.) fscanf(fi,"%c",&myint) to get the bytes.
This was horrible. I refactored the utility to fix that and the processing of the state machine and reduced the load time from 15 minutes to 45 seconds.
Hopefully, Xilinx has fixed the utility by now.
malloc/calloc apparently use swap space to satisfy a request that exceeds available free memory. And that pretty much hangs the system, as the disk-use light remains constantly on. After it happened to me, and I wasn't immediately sure why, I wrote the following 5-line test program to check that this was indeed why the system was hanging:
/* --- test how many bytes can be malloc'ed successfully --- */
#include <stdio.h>
#include <stdlib.h>

int main ( int argc, char *argv[] ) {
    unsigned int nmalloc = (argc>1? atoi(argv[1]) : 10000000 ),
                 size    = (argc>2? atoi(argv[2]) : (0) );
    unsigned char *pmalloc = (size>0? calloc(nmalloc,size) : malloc(nmalloc));

    fprintf( stdout," %s malloc'ed %d elements of %d bytes each.\n",
             (pmalloc==NULL? "UNsuccessfully" : "Successfully"),
             nmalloc, (size>0?size:1) );

    if ( pmalloc != NULL ) free(pmalloc);
} /* --- end-of-function main() --- */
And that indeed hangs the system if the product of your two command-line args exceeds physical memory. The easiest solution would be some way whereby malloc/calloc automatically just fail. A harder and non-portable approach would be to write a little wrapper that popen()'s the free command, parses the output, and only calls malloc/calloc if the request can be satisfied by the available "free" memory, maybe with a little safety factor built in.
Is there any easier and more portable way to accomplish that? (Apparently similar to this question can calloc or malloc be used to allocate ONLY physical memory in OSX?, but I'm hoping for some kind of "yes" answer.)
Edit:
Decided to follow Tom's /proc/meminfo suggestion. That is, rather than popen()'ing "free", just directly parse the existing and easily parsable /proc/meminfo file. And then, a one-line macro of the form
#define noswapmalloc(n) ( (n) < 1000l*memfree(NULL)/2? malloc(n) : NULL )
finishes the job. memfree(), shown below, isn't as portable as I'd like, but can easily and transparently be replaced by a better solution if/when the need arises, which isn't now.
#define _GNU_SOURCE /* for strcasestr() in string.h; must precede all #includes */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
char *strcasestr(); /* non-standard extension */
/* ==========================================================================
* Function: memfree ( memtype )
* Purpose: return number of Kbytes of available memory
* (as reported in /proc/meminfo)
* --------------------------------------------------------------------------
* Arguments: memtype (I) (char *) to null-terminated, case-insensitive
* (sub)string matching first field in
* /proc/meminfo (NULL uses MemFree)
* --------------------------------------------------------------------------
* Returns: ( int ) #Kbytes of memory, or -1 for any error
* --------------------------------------------------------------------------
* Notes: o
* ======================================================================= */
/* --- entry point --- */
int memfree ( char *memtype ) {
    /* ---
     * allocations and declarations
     * ------------------------------- */
    static char memfile[99] = "/proc/meminfo"; /* linux standard */
    static char deftype[99] = "MemFree";       /* default if caller passes null */
    FILE *fp = fopen(memfile,"r");             /* open memfile for read */
    char memline[999];                         /* read memfile line-by-line */
    int nkbytes = (-1);                        /* #Kbytes, init for error */
    /* ---
     * read memfile until line with desired memtype found
     * ----------------------------------------------------- */
    if ( memtype == NULL ) memtype = deftype;  /* caller wants default */
    if ( fp == NULL ) goto end_of_job;         /* but we can't get it */
    while ( fgets(memline,512,fp)              /* read next line */
            != NULL ) {                        /* quit at eof (or error) */
        if ( strcasestr(memline,memtype)       /* look for memtype in line */
             != NULL ) {                       /* found line with memtype */
            char *delim = strchr(memline,':'); /* colon following MemType */
            if ( delim != NULL )               /* NULL if file format error? */
                nkbytes = atoi(delim+1);       /* num after colon is #Kbytes */
            break; }                           /* no need to read further */
    } /* --- end-of-while(fgets()!=NULL) --- */
  end_of_job:                                  /* back to caller with nkbytes */
    if ( fp != NULL ) fclose(fp);              /* close /proc/meminfo file */
    return ( nkbytes );                        /* and return nkbytes to caller */
} /* --- end-of-function memfree() --- */
#if defined(MEMFREETEST)
int main ( int argc, char *argv[] ) {
    char *memtype = ( argc>1? argv[1] : NULL );
    int memfree();
    printf ( " memfree(\"%s\") = %d Kbytes\n Have a nice day.\n",
             (memtype==NULL?" ":memtype), memfree(memtype) );
} /* --- end-of-function main() --- */
#endif
malloc/calloc apparently use swap space to satisfy a request that exceeds available free memory.
Well, no.
Malloc/calloc use virtual memory. The "virtual" means that it's not real - it's an artificially constructed illusion made out of fakery and lies. Your entire process is built on these artificially constructed illusions - a thread is a virtual CPU, a socket is a virtual network connection, the C language is really a specification for a "C abstract machine", a process is a virtual computer (that implements the languages' abstract machine).
You're not supposed to look behind the magic curtain. You're not supposed to know that physical memory exists. The system doesn't hang - the illusion is just slower, but that's fine because the C abstract machine says nothing about how long anything is supposed to take and does not provide any performance guarantees.
More importantly; because of the illusion, software works. It doesn't crash because there's not enough physical memory. Failure means that it takes an infinite amount of time for software to complete successfully, and "an infinite amount of time" is many orders of magnitude worse than "slower because of swap space".
How to get malloc/calloc to fail if request exceeds free physical memory (i.e., don't use swap)
If you are going to look behind the magic curtain, you need to define your goals carefully.
For one example, imagine if your process has 123 MiB of code and there's currently 1000 MiB of free physical RAM; but (because the code is in virtual memory) only a tiny piece of the code is using real RAM (and the rest of the code is on disk because the OS/executable loader used memory mapped files to avoid wasting real RAM until it's actually necessary). You decide to allocate 1000 MiB of memory (and because the OS creating the illusion isn't very good, unfortunately this causes 1000 MiB of real RAM to be allocated). Next, you execute some more code, but the code you execute isn't in real memory yet, so the OS has to fetch the code from the file on the disk into physical RAM, but you consumed all of the physical RAM so the OS has to send some of the data to swap space.
For another example, imagine if your process has 1 MiB of code and 1234 MiB of data that was carefully allocated to make sure that everything fits in physical memory. Then a completely different process is started and it allocates 6789 MiB of memory for its code and data; so the OS sends all of your process' data to swap space to satisfy the other process that you have no control over.
EDIT
The problem here is that the OS providing the illusion is not very good. When you allocate a large amount of virtual memory with malloc() or calloc(); the OS should be able to use a tiny piece of real memory to lie to you and avoid consuming a large amount of real memory. Specifically (for most modern operating systems running on normal hardware); the OS should be able to fill a huge area of virtual memory with a single page full of zeros that is mapped many times (at many virtual addresses) as "read only", so that allocating a huge amount of virtual memory costs almost no physical RAM at all (until you write to the virtual memory, causing the OS to allocate the least physical memory needed to satisfy the modifications). Of course if you eventually do write to all of the allocated virtual memory, then you'll end up exhausting physical memory and using some swap space; but this will probably happen gradually and not all at once - many tiny delays scattered over a large period of time are far less likely to be noticed than a single huge delay.
With this in mind; I'd be tempted to try using mmap(..., MAP_ANONYMOUS, ...) instead of the (poorly implemented) malloc() or calloc(). This might mean that you have to deal with the possibility that the allocated virtual memory isn't guaranteed to be initialized to zeros, but (depending on what you're using the memory for) that's likely to be easy to work around.
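A minimal sketch of that approach, assuming a 64-bit system; the mapping stays purely virtual until individual pages are written:
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t nbytes = (size_t)4 * 1024 * 1024 * 1024;   /* 4 GiB of virtual memory */

    /* Anonymous, private mapping: zero-filled on first touch, so it costs
     * almost no physical RAM (and no swap) until pages are actually written. */
    void *p = mmap(NULL, nbytes, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* ... write only to the parts you actually need ... */

    munmap(p, nbytes);
    return 0;
}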
Expanding on a comment I made to the original question:
If you want to disable swapping, use the swapoff command (sudo swapoff -a). I usually run my machine that way, to avoid it freezing when firefox does something it shouldn't. You can use setrlimit() (or the ulimit command) to set a maximum VM size, but that won't properly compensate for some other process suddenly deciding to be a memory hog (see above).
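For reference, a minimal sketch of the setrlimit() route, capping the address space at an arbitrary 2 GiB so oversized requests fail instead of dipping into swap:
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap total virtual address space at 2 GiB; later malloc/mmap beyond it fail. */
    struct rlimit rl = { .rlim_cur = 2UL << 30, .rlim_max = 2UL << 30 };
    if (setrlimit(RLIMIT_AS, &rl) != 0)
        perror("setrlimit");

    void *p = malloc((size_t)4 << 30);   /* 4 GiB request, expected to fail now */
    printf("malloc(4 GiB) %s\n", p != NULL ? "succeeded" : "failed");
    free(p);
    return 0;
}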
Even if you choose one of the above options, you should read the rest of this answer to see how to avoid unnecessary initialisation on the first call to calloc().
As for your precise test harness, it turns out that you are triggering an unfortunate exception to GNU calloc()'s optimisation.
Here's a comment (now deleted) I made to another answer, which turns out to not be strictly speaking accurate:
I checked the glibc source for the default gnu/linux malloc library, and verified that calloc() does not normally manually clear memory which has just been mmap'd. And malloc() doesn't touch the memory at all.
It turns out that I missed one exception to the calloc optimisation. Because of the way the GNU malloc implementation initialises the malloc system, the first call to calloc always uses memset() to set the newly-allocated storage to 0. Every other call to calloc() passes through the entire calloc logic, which avoids calling memset() on storage which has been freshly mmap'd.
So the following modification to the test program shows radically different behaviour:
#include <stdio.h>
#include <stdlib.h>
int main ( int argc, char *argv[] ) {
/* These three lines were added */
void* tmp = calloc(1000, 1); /* force initialization */
printf("Allocated 1000 bytes at %p\n", tmp);
free(tmp);
/* The rest is unchanged */
unsigned int nmalloc = (argc>1? atoi(argv[1]) : 10000000 ),
size = (argc>2? atoi(argv[2]) : (0) );
unsigned char *pmalloc = (size>0? calloc(nmalloc,size):malloc(nmalloc));
fprintf( stdout," %s malloc'ed %d elements of %d bytes each.\n",
(pmalloc==NULL? "UNsuccessfully" : "Successfully"),
nmalloc, (size>0?size:1) );
if ( pmalloc != NULL ) free(pmalloc);
}
Note that if you set MALLOC_PERTURB_ to a non-zero value, then it is used to initialise malloc()'d blocks, and forces calloc()'d blocks to be initialised to 0. That's used in the test below.
In the following, I used /usr/bin/time to show the number of page faults during execution. Pay attention to the number of minor faults, which are the result of the operating system zero-initialising a previously unreferenced page in an anonymous mmap'd region (and some other occurrences, like mapping a page already present in Linux's page cache). Also look at the resident set size and, of course, the execution time.
$ gcc -Og -ggdb -Wall -o mall mall.c
$ # A simple malloc completes instantly without page faults
$ /usr/bin/time ./mall 4000000000
Allocated 1000 bytes at 0x55b94ff56260
Successfully malloc'ed -294967296 elements of 1 bytes each.
0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 1600maxresident)k
0inputs+0outputs (0major+61minor)pagefaults 0swaps
$ # Unless we tell malloc to initialise memory
$ MALLOC_PERTURB_=35 /usr/bin/time ./mall 4000000000
Allocated 1000 bytes at 0x5648c2436260
Successfully malloc'ed -294967296 elements of 1 bytes each.
0.19user 1.23system 0:01.43elapsed 99%CPU (0avgtext+0avgdata 3907584maxresident)k
0inputs+0outputs (0major+976623minor)pagefaults 0swaps
# Same, with calloc. No page faults, instant completion.
$ /usr/bin/time ./mall 1000000000 4
Allocated 1000 bytes at 0x55e8257bb260
Successfully malloc'ed 1000000000 elements of 4 bytes each.
0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 1656maxresident)k
0inputs+0outputs (0major+62minor)pagefaults 0swaps
$ # Again, setting the magic malloc config variable changes everything
$ MALLOC_PERMUTE_=35 /usr/bin/time ./mall 1000000000 4
Allocated 1000 bytes at 0x5646f391e260
Successfully malloc'ed 1000000000 elements of 4 bytes each.
0.00user 0.00system 0:00.00elapsed 100%CPU (0avgtext+0avgdata 1656maxresident)k
0inputs+0outputs (0major+62minor)pagefaults 0swaps
I've been reading a lot about memory allocation on the heap and how certain heap management allocators do it.
Say I have the following program:
#include<stdlib.h>
#include<stdio.h>
#include<unistd.h>
int main(int argc, char *argv[]) {
// allocate 4 gigabytes of RAM
void *much_mems = malloc(4294967296);
// sleep for 10 minutes
sleep(600);
// free teh ram
free(much_mems);
// sleep some moar
sleep(600);
return 0;
}
Let's say for sake of argument that my compiler doesn't optimize out anything above, that I can actually allocate 4GiB of RAM, that the malloc() call returns an actual pointer and not NULL, that size_t can hold an integer as big as 4294967296 on my given platform, that the allocater implemented by the malloc call actually does allocate that amount of RAM in the heap. Pretend that the above code does exactly what it looks like it will do.
After the call to free executes, how does the kernel know that those 4 GiB of RAM are now eligible for use for other processes and for the kernel itself? I'm not assuming the kernel is Linux, but that would be a good example. Before the call to free, this process has a heap size of at least 4GiB, and afterward, does it still have that heap size?
How do modern operating systems allow userspace programs to return memory back to kernel space? Do free implementations execute a syscall to the kernel (or many syscalls) to tell it which areas of memory are now available? And is it possible that my 4 GiB allocation will be non-contiguous?
Do free implementations execute a syscall to the kernel (or many syscalls) to tell it which areas of memory are now available?
Yes.
A modern implementation of malloc on Linux will call mmap to allocate a large amount of memory. The kernel will find an unused virtual address, mark it as allocated, and return it. (The kernel may also return an error if there isn't enough free memory)
free would then call munmap to deallocate the memory, passing the address and size of the allocation.
On Windows, malloc will call VirtualAlloc and free will call VirtualFree.
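A minimal sketch of that round trip done by hand; it is roughly what a malloc implementation does internally for a large block (the exact threshold is an implementation detail):
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = (size_t)4 << 30;   /* 4 GiB, as in the question (64-bit system assumed) */

    /* What malloc does for a large request: ask the kernel for a mapping. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* ... use the memory ... */

    /* What free does for such a block: the syscall that tells the kernel
     * this range is available again. */
    munmap(p, len);
    return 0;
}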
On GNU/Linux with glibc, large memory allocations of more than a few hundred kilobytes are handled by calling mmap. When the free function is invoked on such a block, the library knows that the memory was allocated this way (thanks to meta-data stored in a header). It simply calls munmap on it to release it. That's how the kernel knows; its mmap and munmap API is being used.
You can see these calls if you run strace on the program.
The kernel keeps track of all mmap-ed regions using a red-black tree. Given an arbitrary virtual address, it can quickly determine whether it lands in the mmap area, and which mapping, by performing a tree walk.
Before the call to free, this process has a heap size of at least 4GiB...
The C language does not define either "heap" or "stack". Before the call to free, this process has a chunk of 4 GB dynamically allocated memory...
and afterward, does it still have that heap size?
...and after the free(), access to that memory would be undefined behaviour, so for practical purposes, that dynamically allocated memory is no longer "there".
What the library does "under the hood" (e.g. caching, see below) is up to the library, and is subject to change without further notice. This could change with the amount of available physical memory, system load, runtime parameters, ...
How do modern operating systems allow userspace programs to return memory back to kernel space?
It's up to the standard library's implementation to decide (which, of course, has to talk to the operating system to actually, physically allocate / free memory).
Others have pointed out how certain, existing implementations do it. Other libraries, operating systems, and environments exist.
Do free implementations execute a syscall to the kernel (or many syscalls) to tell it which areas of memory are now available?
Possibly. A common optimization done by library implementations is to "cache" free()d memory, so subsequent malloc() calls can be served without talking to the kernel (which is a costly operation). When, how much, and how long memory is cached this way is, you guessed it, implementation-defined.
And is it possible that my 4 GiB allocation will be non-contiguous?
The process will always "see" contiguous memory. In a system supporting virtual memory (i.e. "modern" desktop OS's like Linux or Windows), the physical memory might be non-contiguous, but the virtual addresses your process gets to see will be contiguous (or the malloc() would have failed if this requirement could not be serviced).
Again, other systems exist. You might be looking at a system that doesn't virtualize addresses (i.e. gives physical addresses to the process). You might be looking at a system that assigns a given amount of memory to a process on startup, serves any malloc() requests from that, and doesn't support the allocation of additional memory. And so on.
If we use Linux as an example, it uses mmap to allocate large chunks of memory. This means that when you free such a chunk it gets unmapped, i.e. the kernel gets told that it can now reclaim this memory. Also read up on the brk and sbrk system calls. A good place to start would be here...
What does brk( ) system call do?
and here. The following post discusses how malloc is implemented, which will give you a good idea of what's happening under the covers...
How is malloc() implemented internally?
Doug Lea's malloc can be found here. It's well commented and public domain...
ftp://g.oswego.edu/pub/misc/malloc.c
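A tiny sketch for watching the program break move, assuming glibc serves allocations below its mmap threshold (~128 KB) from the brk heap:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *before = sbrk(0);          /* current program break */
    void *p      = malloc(100000);   /* small enough to come from the brk heap */
    void *after  = sbrk(0);

    printf("break before malloc: %p\nbreak after malloc:  %p\n", before, after);
    free(p);
    return 0;
}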
malloc() and free() are not themselves kernel functions (system calls); they are library routines that the application calls to allocate and free memory on the heap.
The application does not conjure up that memory on its own, though: underneath, the heap manager has to obtain blocks from the kernel's memory managers.
To see how such a heap manager works, look at the heap implementation code below (this particular one runs inside a kernel):
void *heap_alloc(uint32_t nbytes) {
    heap_header *p, *prev_p;  // used to keep track of the current unit
    unsigned int nunits;      // number of "allocation units" needed for nbytes of memory

    nunits = (nbytes + sizeof(heap_header) - 1) / sizeof(heap_header) + 1; // see how much we will need to allocate for this call

    // check to see if the list has been created yet; start it if not
    if ((prev_p = _heap_free) == NULL) {
        _heap_base.s.next = _heap_free = prev_p = &_heap_base; // point at the base of the memory
        _heap_base.s.alloc_sz = 0;                             // and set its allocation size to zero
    }

    // now enter a for loop to find a block of memory
    for (p = prev_p->s.next;; prev_p = p, p = p->s.next) {
        // did we find a big enough block?
        if (p->s.alloc_sz >= nunits) {
            // the block is exact length
            if (p->s.alloc_sz == nunits)
                prev_p->s.next = p->s.next;
            // the block needs to be cut
            else {
                p->s.alloc_sz -= nunits;
                p += p->s.alloc_sz;
                p->s.alloc_sz = nunits;
            }
            _heap_free = prev_p;
            return (void *)(p + 1);
        }
        // not enough space!! Try to get more from the kernel
        if (p == _heap_free) {
            // if the kernel has no more memory, return error!
            if ((p = morecore()) == NULL)
                return NULL;
        }
    }
}
This heap_alloc function uses the morecore function, which is implemented as follows:
heap_header *morecore() {
    char *cp;
    heap_header *up;

    cp = (char *)pmmngr_alloc_block();  // allocate more memory for the heap

    // if cp is null we have no memory left
    if (cp == NULL)
        return NULL;

    //vmmngr_mapPhysicalAddress(cp, (void *)_virt_addr); // and map its virtual address to its physical address
    vmmngr_mapPhysicalAddress(vmmngr_get_directory(), _virt_addr, (uint32_t)cp, I86_PTE_PRESENT | I86_PTE_WRITABLE);

    _virt_addr += BLOCK_SIZE;  // tack on BLOCK_SIZE bytes to the virtual address; this will be our next allocation address

    up = (heap_header *)cp;
    up->s.alloc_sz = BLOCK_SIZE;
    heap_free((void *)(up + 1));

    return _heap_free;
}
As you can see, this function asks the physical memory manager to allocate a block:
cp = (char *)pmmngr_alloc_block();
and then maps the allocated block into virtual memory:
vmmngr_mapPhysicalAddress(vmmngr_get_directory(), _virt_addr, (uint32_t)cp, I86_PTE_PRESENT | I86_PTE_WRITABLE);
As you can see, the whole story is controlled by the heap manager, which in this example runs at kernel level.
The malloc function uses both the sbrk and mmap system calls. The sbrk function increases or decreases the data segment, so it grows linearly. My question is: is this linearity always maintained, or can, for example, an mmap call allocate memory overlapping the data segment?
I'm talking about multithreaded programs running on multicore systems. This blog talks about some serious flaws of sbrk for multithreaded programs, and it points out that memory allocated with sbrk can be intermingled with memory allocated with mmap (the sbrk heap could become discontinuous because an mmapped region or a shared object obstructs the growth of the heap).
That blog post doesn't see the forest for the trees; only the malloc implementation is allowed to call sbrk with a nonzero argument. More precisely, most malloc implementations for Unix will stop functioning correctly (and by that I mean "your program will crash") if application code calls sbrk with a nonzero argument. If you want to make a large allocation directly from the OS you must use mmap to do it.
(It is true that in a multi-threaded program, malloc must internally wrap a mutex around its calls to sbrk, but that's an implementation detail. POSIX says malloc is thread safe, that's the important thing for an application programmer.)
mmap will not allocate memory overlapping the brk area unless you use MAP_FIXED. If you use MAP_FIXED and your program blows up you get to keep all the pieces.
The kernel tries to avoid doing it, but mmap in normal operation could conceivably allocate memory close to the top of the brk area. If this happens, a subsequent sbrk call that would collide with the mmap region will fail. It will not allocate discontiguous memory. Good implementations of malloc ought to detect this condition and start using mmap for everything. I have not actually tried it, but a test program would be pretty easy to write.
is this linearity always maintained, or for example, an mmap call can allocate memory overlapping the data segment?
Observed behavior is that the brk area is always linear. Implementation details: If enlarging the brk area is not possible, for example due to a blocking mapping, glibc will switch to mmap-only. Small allocations (<128KB) seem to be obtained by glibc via brk if possible, so blocking that with:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    int i;

    for (i = 0; i < 1024; ++i) {
        malloc(2048);
        if (i == 512) {
            void *r, *end = sbrk(0);
            r = mmap(end, 4096, PROT_NONE,
                     MAP_PRIVATE|MAP_ANONYMOUS, 0, 0);
        }
    }
}
when straced, yields indeed
[...]
brk(0x1e7d000) = 0x1e7d000
mmap(0x1e7d000, 4096, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, 0, 0) = 0x1e7d000
brk(0x1e9e000) = 0x1e7d000 <-- (!)
mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fbfd9bc9000