pthread_create fails with ENOMEM on low free memory scenario - c

I have a SH4 board, here are the specs...
uname -a
Linux LINUX7109 2.6.23.17_stm23_A18B-HMP_7109-STSDK #1 PREEMPT Fri Aug 6 16:08:19 ART 2010
sh4 unknown
and suppose I have eaten pretty much all the memory, and have only 9 MB left.
free
total used free shared buffers cached
Mem: 48072 42276 5796 0 172 3264
-/+ buffers/cache: 38840 9232
Swap: 0 0 0
Now, when I try to launch a single thread with the default stack size (8 MB),
pthread_create fails with ENOMEM. If I strace my test code, I can see that the call that is failing is mmap:
old_mmap(NULL, 8388608, PROT_READ|PROT_WRITE|PROT_EXEC,
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
However, when I set the stack size to a lower value using ulimit -s:
ulimit -s 7500
I can now launch 10 threads. Each thread does not allocate anything, so it
only consumes the minimum overhead (approx. 8 KB per thread, right?).
So, my question is:
Knowing that mmap doesn't actually consume the memory,
why is pthread_create() (or mmap) failing when the available memory is below
the thread stack size?

The VM setting /proc/sys/vm/overcommit_memory (aka. sysctl vm.overcommit_memory) controls whether Linux is willing to hand out more address space than the combined RAM+swap of the machine. (Of course, if you actually try to access that much memory, something will crash. Try a search on "linux oom-killer"...)
The default for this setting is 0. I am going to speculate that someone set it to something else on your system.

Under glibc, the default stack size for threads is 2-10 megabytes (often 8). You should use pthread_attr_setstacksize and call pthread_create with the resulting attributes object to request a thread with a smaller stack.
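A minimal sketch of that approach (the worker function and the 256 KiB figure are just illustrative; any size at or above PTHREAD_STACK_MIN should be accepted):
#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* Illustrative thread body; it does nothing. */
static void *worker(void *arg) {
    (void)arg;
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;
    int err;

    pthread_attr_init(&attr);
    /* Ask for a 256 KiB stack instead of the 8 MiB default. */
    err = pthread_attr_setstacksize(&attr, 256 * 1024);
    if (err != 0)
        fprintf(stderr, "pthread_attr_setstacksize: %s\n", strerror(err));

    err = pthread_create(&tid, &attr, worker, NULL);
    if (err != 0)
        fprintf(stderr, "pthread_create: %s\n", strerror(err));
    else
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}
With a smaller per-thread stack, the mmap for each stack reserves far less address space, which is usually enough to avoid the ENOMEM above even on a memory-constrained board.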

mmap consumes address space.
Pointers have to uniquely identify each piece of mapped "memory" (including mmap'ed files).
A 32-bit pointer can only address 2-3 GB of memory (32 bits = 2^32 = 4 GB, but part of the address space is reserved by the kernel). That address space is limited.
All threads in a process share the same address space, but different processes have separate address spaces.
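As a quick, hedged illustration of how pointer width bounds the address space (the kernel's reserved portion is not accounted for here):
#include <stdio.h>

int main(void) {
    /* Pointer width in bits decides the theoretical address-space size. */
    printf("pointer size: %zu bits\n", sizeof(void *) * 8);
    printf("theoretical address space: %llu MiB\n",
           (unsigned long long)1 << (sizeof(void *) * 8 - 20));
    return 0;
}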

This is the operating system's only chance to fail the operation gracefully. If the implementation allowed this operation to succeed, it could run out of memory during an operation for which it cannot return a failure code, such as the stack growing. The operating system prefers to let an operation fail gracefully rather than risk having to kill a completely innocent process.

Related

How to use malloc() allocates memory more than RAM in redhat?

System information: Linux version 2.6.32-573.12.1.el6.x86_64 (mockbuild@x86-031.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) ) #1 SMP Mon Nov 23 12:55:32 EST 2015
RAM 48 GB
Problem: I want to malloc() 100 GB of memory, but it fails to allocate on the Red Hat system.
I find that 100 GB can be allocated on macOS with 8 GB RAM (compiled with clang). I am very confused about that.
Maybe the lazy allocation described in this link? Why malloc() doesn't stop on OS X?
But why can't a Linux system do it? I tried Ubuntu and Red Hat; both fail to do that.
Result:
After investigation, I find that the following two steps make malloc() effectively unlimited:
echo "1" > /proc/sys/vm/overcommit_memory
ulimit -v unlimited
Possible reasons:
The system is out of physical RAM or swap space.
In 32 bit mode, the process size limit was hit.
Possible solutions:
Reduce memory load on the system.
Increase physical memory or swap space.
Check if swap backing store is full.
I think these points can help you to understand the malloc failure issue.
Input validation. For example, you've asked for many gigabytes of memory in a single allocation. The exact limit (if any) differs by malloc implementation; POSIX says nothing about a maximum value, short of its type being a size_t.
Allocation failures. With a data segment-based malloc, this means brk failed. This occurs when the new data segment is invalid, the system is low on memory, or the process has exceeded its maximum data segment size (as specified by RLIMIT_DATA). With a mapping-based malloc, this means mmap failed. This occurs when the process is out of virtual memory (specified by RLIMIT_AS), has exceeded its limit on mappings, or has requested too large a mapping. Note most modern Unix systems use a combination of brk and mmap to implement malloc, choosing between the two based on the size of the requested allocation.
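A small, hedged sketch of checking for that failure at the call site (the 100 GB figure mirrors the question and assumes a 64-bit size_t; glibc sets errno to ENOMEM when the underlying brk()/mmap() refuses the request):
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t huge = (size_t)100 * 1024 * 1024 * 1024; /* 100 GB, needs a 64-bit size_t */
    void *p = malloc(huge);
    if (p == NULL) {
        perror("malloc"); /* typically reports "Cannot allocate memory" (ENOMEM) */
        return 1;
    }
    free(p);
    return 0;
}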

Logic to find heap memory space [duplicate]

I was writing some code and it kept crashing. Later after digging the dumps I realized I was overshooting the maximum heap limit (life would have been easier if I had added a check on malloc). Although I fixed that, is there any way to increase my heap size?
PS: A quite similar question here but the reply is unclear to me.
Heap and memory management is a facility provided by your C library (likely glibc). It maintains the heap and returns chunks of memory to you every time you do a malloc(). It doesn't know the heap size limit: every time you request more memory than what is available on the heap, it just goes and asks the kernel for more (either using sbrk() or mmap()).
By default, the kernel will almost always give you more memory when asked. This means that malloc() will always return a valid address. It's only when you refer to an allocated page for the first time that the kernel will actually bother to find a page for you. If it finds that it cannot hand you one, it runs the OOM killer, which, according to a measure called badness (which includes your process's and its children's virtual memory sizes, nice level, overall running time, etc.), selects a victim and kills it. This memory management technique is called overcommit and is used by the kernel when /proc/sys/vm/overcommit_memory is 0 or 1. See overcommit-accounting in the kernel documentation for details.
By writing 2 into /proc/sys/vm/overcommit_memory you can disable the overcommit. If you do that the kernel will actually check whether it has memory before promising it. This will result in malloc() returning NULL if no more memory is available.
You can also set a limit on the virtual memory a process can allocate with setrlimit() and RLIMIT_AS or with the ulimit -v command. Regardless of the overcommit setting described above, if the process tries to allocate more memory than the limit, the kernel will refuse it and malloc() will return NULL. Note that in modern Linux kernels (including the entire 2.6.x series) the limit on the resident size (setrlimit() with RLIMIT_RSS or the ulimit -m command) is ineffective.
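A minimal sketch (the 1 GiB limit is illustrative) of applying the same RLIMIT_AS cap from inside the program with setrlimit() instead of ulimit -v:
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    rl.rlim_cur = 1024UL * 1024 * 1024; /* soft limit: 1 GiB of address space */
    rl.rlim_max = 1024UL * 1024 * 1024; /* hard limit */
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    /* A request beyond the limit now makes malloc() return NULL
       regardless of the overcommit setting. */
    void *p = malloc((size_t)2 * 1024 * 1024 * 1024);
    printf("2 GiB malloc %s\n", p ? "succeeded" : "failed");
    free(p);
    return 0;
}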
The session below was run on kernel 2.6.32 with 4GB RAM and 8GB swap.
$ cat bigmem.c
#include <stdlib.h>
#include <stdio.h>

int main() {
    int i = 0;
    for (; i < 13*1024; i++) {
        void* p = malloc(1024*1024);
        if (p == NULL) {
            fprintf(stderr, "malloc() returned NULL on %dth request\n", i);
            return 1;
        }
    }
    printf("Allocated it all\n");
    return 0;
}
$ cc -o bigmem bigmem.c
$ cat /proc/sys/vm/overcommit_memory
0
$ ./bigmem
Allocated it all
$ sudo bash -c "echo 2 > /proc/sys/vm/overcommit_memory"
$ cat /proc/sys/vm/overcommit_memory
2
$ ./bigmem
malloc() returned NULL on 8519th request
$ sudo bash -c "echo 0 > /proc/sys/vm/overcommit_memory"
$ cat /proc/sys/vm/overcommit_memory
0
$ ./bigmem
Allocated it all
$ ulimit -v $(( 1024*1024 ))
$ ./bigmem
malloc() returned NULL on 1026th request
$
In the example above swapping or OOM kill could never occur, but this would change significantly if the process actually tried to touch all the memory allocated.
To answer your question directly: unless you have the virtual memory limit explicitly set with the ulimit -v command, there is no heap size limit other than the machine's physical resources or the logical limit of your address space (relevant on 32-bit systems). Your glibc will keep allocating memory on the heap and will request more and more from the kernel as your heap grows. Eventually you may end up swapping badly if all physical memory is exhausted. Once the swap space is exhausted, a random process will be killed by the kernel's OOM killer.
Note, however, that memory allocation may fail for many more reasons than lack of free memory, fragmentation or reaching a configured limit. The sbrk() and mmap() calls used by glibc's allocator have their own failures, e.g. the program break reached another, already allocated address (e.g. shared memory or a page previously mapped with mmap()), or the process's maximum number of memory mappings has been exceeded.
The heap usually is as large as the addressable virtual memory on your architecture.
You should check your system's current limits with the ulimit -a command and look for this line: max memory size (kbytes, -m) 3008828. This line on my OpenSuse 11.4 x86_64 with ~3.5 GiB of RAM says I have roughly 3 GB of RAM per process.
Then you can truly test your system using this simple program to check max usable memory per process:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char* argv[]) {
    size_t oneHundredMiB = 100 * 1048576;
    size_t maxMemMiB = 0;
    void *memPointer = NULL;
    do {
        if (memPointer != NULL) {
            printf("Max Tested Memory = %zu\n", maxMemMiB);
            memset(memPointer, 0, maxMemMiB);
            free(memPointer);
        }
        maxMemMiB += oneHundredMiB;
        memPointer = malloc(maxMemMiB);
    } while (memPointer != NULL);
    printf("Max Usable Memory approx = %zu\n", maxMemMiB - oneHundredMiB);
    return 0;
}
This program gets memory in 100 MiB increments, prints the amount currently allocated, writes zeroes to it, then frees the memory. When the system can't give any more memory, malloc() returns NULL and the program displays the final maximum usable amount of RAM.
The caveat is that your system will start to swap heavily in the final stages. Depending on your system configuration, the kernel might decide to kill some processes. I use 100 MiB increments so there is some breathing space for the system and other apps. You should close anything that you don't want crashing.
That being said, on the system where I'm writing this nothing crashed, and the program above reports nearly the same value as ulimit -a. The difference is that it actually tested the memory and, by means of memset(), confirmed the memory was given and used.
For comparison, on an Ubuntu 10.04 x86 VM with 256 MiB of RAM and 400 MiB of swap, the ulimit report was max memory size (kbytes, -m) unlimited and my little program reported 524,288,000 bytes, which is roughly the combined RAM and swap, discounting the RAM used by other software and the kernel.
Edit: As Adam Zalcman wrote, ulimit -m is no longer honored on 2.6 and newer Linux kernels, so I stand corrected. But ulimit -v is honored. For practical results you should replace -m with -v and look for virtual memory (kbytes, -v) 4515440. It seems mere chance that my SUSE box had the -m value coinciding with what my little utility reported. You should remember that this is virtual memory assigned by the kernel; if physical RAM is insufficient, swap space will be used to make up for it.
If you want to know how much physical RAM is available without disturbing any process or the system, you can use
long total_available_ram = sysconf(_SC_AVPHYS_PAGES) * sysconf(_SC_PAGESIZE);
This will exclude cache and buffer memory, so the number can be far smaller than the actual available memory. OS caches can be quite large and evicting them can free the needed extra memory, but that is handled by the kernel.
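For completeness, a self-contained sketch around that call (_SC_PHYS_PAGES and _SC_AVPHYS_PAGES are glibc extensions, not strict POSIX):
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long page  = sysconf(_SC_PAGESIZE);
    long total = sysconf(_SC_PHYS_PAGES);   /* all physical pages */
    long avail = sysconf(_SC_AVPHYS_PAGES); /* pages not currently in use */
    printf("page size: %ld bytes\n", page);
    printf("total RAM: %lld MiB\n", (long long)total * page / (1024 * 1024));
    printf("available RAM (excluding cache/buffers): %lld MiB\n",
           (long long)avail * page / (1024 * 1024));
    return 0;
}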
I think your original problem was that malloc failed to allocate the requested memory on your system.
Why this happened is specific to your system.
When a process is loaded, it is allocated memory up to a certain address, which is the program break for the process. Beyond that address the memory is unmapped for the process. So when the process "hits" the break, it requests more memory from the system, and one way to do this is via the system call sbrk.
malloc would do that under the hood, but on your system, for some reason, it failed.
There could be many reasons for this, for example:
1) I think in Linux there is a limit on the maximum memory size. I think it is ulimit, and perhaps you hit that. Check whether it is set to a limit.
2) Perhaps your system was too loaded.
3) Your program does bad memory management and you end up with fragmented memory, so malloc cannot get a chunk of the size you requested.
4) Your program corrupts malloc's internal data structures, i.e. bad pointer usage.
etc.
I'd like to add one point to the previous answers.
Apps have the illusion that malloc() returns 'solid' blocks; in reality, a buffer may exist scattered, pulverized, over many pages of RAM. The crucial fact here is this: the virtual memory of a process, containing its code or containing something like a large array, must be contiguous. Let's even admit that code and data are separated; a large array, char str[universe_size], must be contiguous.
Now: can a single app enlarge the heap arbitrarily, to alloc such an array?
The answer could be 'yes' if there were nothing else running on the machine. The heap can be ridiculously huge, but it must have boundaries. At some point, calls to sbrk() (on Linux, the function that, in short, 'enlarges' the heap) would stumble on an area already reserved for another mapping.
This link provides some interesting and clarifying examples, check it out. I did not find the info on Linux.
You can find the process ID of your webapp/Java process from top.
Use jmap -heap <pid> to get the heap allocation. I tested this on AWS EC2 for Elastic Beanstalk and it gives the allocated heap. Here is the detailed answer:
Xmx settings in Elastic Beanstalk through environment properties

Address Space Layout for a Multithreaded Linux Process

I want to know the full detail of the address space layout of a multithreaded Linux Process for both 64 bit and 32 bit. Link to any article that describes it will be appreciated. And note that I need to know full details, not just an overview, because I will be directly dealing with it. So I need to know for example, where are the thread stacks located, the heap, thread private data etc...
Thread stacks are allocated with mmap at thread start (or even before; you can set the stack space in the pthread attributes). TLS data is stored at the beginning of the thread's stack. The size of a thread's stack is fixed, typically 2 to 8 MB, and can't be changed while the thread is live. (The first thread, running main, still uses the main process stack at the top of the address space, and that stack may grow and shrink.) Heap and code are shared between all threads. Mutexes can be anywhere in the data section; a mutex is just a struct.
The mmap of a thread's stack is not fixed at any particular address:
Glibc sources
mem = mmap (NULL, size, prot,
MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
P.S. Modern GCC allows thread stacks to grow without a fixed limit via the split-stack feature (-fsplit-stack).
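To see where glibc actually placed a thread's stack, here is a small sketch using pthread_getattr_np(), a glibc-specific call (hence the _np, non-portable, suffix):
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    (void)arg;
    pthread_attr_t attr;
    void *stack_addr;
    size_t stack_size;

    /* Query the attributes of the running thread, then extract its stack. */
    if (pthread_getattr_np(pthread_self(), &attr) == 0) {
        if (pthread_attr_getstack(&attr, &stack_addr, &stack_size) == 0)
            printf("thread stack: base %p, size %zu KiB\n",
                   stack_addr, stack_size / 1024);
        pthread_attr_destroy(&attr);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
Each thread should report a different base address, both coming from mmap'ed regions below the main stack.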

Maximum memory which malloc can allocate

I was trying to figure out the maximum amount of memory I can malloc on my machine
(1 GB RAM, 160 GB HD, Windows platform).
I read that the maximum memory malloc can allocate is limited to physical memory (on heap).
Also when a program exceeds consumption of memory to a certain level, the computer stops working because other applications do not get enough memory that they require.
So to confirm, I wrote a small program in C:
#include <stdlib.h>

int main() {
    int *p;
    while (1) {
        p = (int *)malloc(4);
        if (!p)
            break;
    }
}
I was hoping that there would be a time when memory allocation would fail and the loop would break, but my computer hung as it was an infinite loop.
I waited for about an hour and finally I had to force shut down my computer.
Some questions:
Does malloc allocate memory from HD also?
What was the reason for above behaviour?
Why didn't loop break at any point of time?
Why wasn't there any allocation failure?
I read that the maximum memory malloc can allocate is limited to physical memory (on heap).
Wrong: most computers/OSs support virtual memory, backed by disk space.
Some questions: does malloc allocate memory from HDD also?
malloc asks the OS, which in turn may well use some disk space.
What was the reason for above behavior? Why didn't the loop break at any time?
Why wasn't there any allocation failure?
You just asked for too little at a time: the loop would have broken eventually (well after your machine slowed to a crawl due to the large excess of virtual vs physical memory and the consequent super-frequent disk access, an issue known as "thrashing") but it exhausted your patience well before then. Try getting e.g. a megabyte at a time instead.
When a program exceeds consumption of memory to a certain level, the
computer stops working because other applications do not get enough
memory that they require.
A total stop is unlikely, but when an operation that normally would take a few microseconds ends up taking (e.g.) tens of milliseconds, those four orders of magnitude may certainly make it feel as if the computer had basically stopped, and what would normally take a minute could take a week.
I know this thread is old, but for anyone willing to try it themselves, use this code snippet:
#include <stdlib.h>

int main() {
    int *p;
    while (1) {
        int inc = 1024 * 1024 * sizeof(char);
        p = (int *)calloc(1, inc);
        if (!p)
            break;
    }
}
run
$ gcc memtest.c
$ ./a.out
Upon running, this code fills up one's RAM until it is killed by the kernel. It uses calloc instead of malloc to prevent "lazy evaluation". Ideas taken from this thread:
Malloc Memory Questions
This code quickly filled my RAM (4 GB) and then, in about 2 minutes, my 20 GB swap partition before it died. 64-bit Linux, of course.
/proc/sys/vm/overcommit_memory controls the maximum on Linux
On Ubuntu 19.04, for example, we can easily see that malloc is implemented with mmap(MAP_ANONYMOUS) by using strace.
Then man proc then describes how /proc/sys/vm/overcommit_memory controls the maximum allocation:
This file contains the kernel virtual memory accounting mode. Values are:
0: heuristic overcommit (this is the default)
1: always overcommit, never check
2: always check, never overcommit
In mode 0, calls of mmap(2) with MAP_NORESERVE are not checked, and the default check is very weak, leading to the risk of getting a process "OOM-killed".
In mode 1, the kernel pretends there is always enough memory, until memory actually runs out. One use case for this mode is scientific computing applications that employ large sparse arrays. In Linux kernel versions before 2.6.0, any nonzero value implies mode 1.
In mode 2 (available since Linux 2.6), the total virtual address space that can be allocated (CommitLimit in /proc/meminfo) is calculated as
CommitLimit = (total_RAM - total_huge_TLB) * overcommit_ratio / 100 + total_swap
where:
total_RAM is the total amount of RAM on the system;
total_huge_TLB is the amount of memory set aside for huge pages;
overcommit_ratio is the value in /proc/sys/vm/overcommit_ratio; and
total_swap is the amount of swap space.
For example, on a system with 16GB of physical RAM, 16GB of swap, no space dedicated to huge pages, and an overcommit_ratio of 50, this formula yields a CommitLimit of 24GB.
Since Linux 3.14, if the value in /proc/sys/vm/overcommit_kbytes is nonzero, then CommitLimit is instead calculated as:
CommitLimit = overcommit_kbytes + total_swap
See also the description of /proc/sys/vm/admin_reserve_kbytes and /proc/sys/vm/user_reserve_kbytes.
Documentation/vm/overcommit-accounting.rst in the 5.2.1 kernel tree also gives some information, although lol a bit less:
The Linux kernel supports the following overcommit handling modes
0 Heuristic overcommit handling. Obvious overcommits of address
space are refused. Used for a typical system. It ensures a
seriously wild allocation fails while allowing overcommit to
reduce swap usage. root is allowed to allocate slightly more
memory in this mode. This is the default.
1 Always overcommit. Appropriate for some scientific
applications. Classic example is code using sparse arrays and
just relying on the virtual memory consisting almost entirely
of zero pages.
2 Don't overcommit. The total address space commit for the
system is not permitted to exceed swap + a configurable amount
(default is 50%) of physical RAM. Depending on the amount you
use, in most situations this means a process will not be
killed while accessing pages but will receive errors on memory
allocation as appropriate.
Useful for applications that want to guarantee their memory
allocations will be available in the future without having to
initialize every page.
Minimal experiment
We can easily see the maximum allowed value with:
main.c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv) {
    char *chars;
    size_t nbytes;

    /* Decide how many bytes to allocate. */
    if (argc < 2) {
        nbytes = 2;
    } else {
        nbytes = strtoull(argv[1], NULL, 0);
    }

    /* Allocate the bytes. */
    chars = mmap(
        NULL,
        nbytes,
        PROT_READ | PROT_WRITE,
        MAP_SHARED | MAP_ANONYMOUS,
        -1,
        0
    );

    /* This can happen for example if we ask for too much memory. */
    if (chars == MAP_FAILED) {
        perror("mmap");
        exit(EXIT_FAILURE);
    }

    /* Free the allocated memory. */
    munmap(chars, nbytes);
    return EXIT_SUCCESS;
}
GitHub upstream.
Compile and run to allocate 1GiB and 1TiB:
gcc -ggdb3 -O0 -std=c99 -Wall -Wextra -pedantic -o main.out main.c
./main.out 0x40000000
./main.out 0x10000000000
We can then play around with the allocation value to see what the system allows.
I can't find precise documentation for 0 (the default), but on my 32 GiB RAM machine it does not allow the 1 TiB allocation:
mmap: Cannot allocate memory
If I enable unlimited overcommit however:
echo 1 | sudo tee /proc/sys/vm/overcommit_memory
then the 1TiB allocation works fine.
Mode 2 is well documented, but I'm too lazy to carry out the precise calculations to verify it. I will just point out that in practice we are allowed to allocate about:
overcommit_ratio / 100
of total RAM, and overcommit_ratio is 50 by default, so we can allocate about half of total RAM.
VSZ vs RSS and the out-of-memory killer
So far, we have just allocated virtual memory.
However, at some point of course, if you use enough of those pages, Linux will have to start killing some processes.
I have illustrated that in detail at: What is RSS and VSZ in Linux memory management
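For contrast, a hedged variant of the mmap experiment above that also touches every page with memset, so the allocation is committed and shows up in RSS rather than only in VSZ (the 1 GiB size is illustrative):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t nbytes = (size_t)1 * 1024 * 1024 * 1024; /* 1 GiB, illustrative */
    char *chars = mmap(NULL, nbytes, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (chars == MAP_FAILED) {
        perror("mmap");
        exit(EXIT_FAILURE);
    }
    /* Writing to every page forces the kernel to back the mapping with
       real memory, so RSS grows; the mmap alone only grows VSZ. */
    memset(chars, 1, nbytes);
    getchar(); /* pause here so RSS can be inspected with ps or top */
    munmap(chars, nbytes);
    return EXIT_SUCCESS;
}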
Try this
#include <stdlib.h>
#include <stdio.h>

int main() {
    int Mb = 0;
    while (malloc(1 << 20))
        ++Mb;
    printf("Allocated %d Mb total\n", Mb);
    return 0;
}
Include stdlib and stdio for it.
This extract is taken from Deep C Secrets.
malloc does its own memory management, managing small memory blocks itself, but ultimately it uses the Win32 Heap functions to allocate memory. You can think of malloc as a "memory reseller".
The windows memory subsystem comprises physical memory (RAM) and virtual memory (HD). When physical memory becomes scarce, some of the pages can be copied from physical memory to virtual memory on the hard drive. Windows does this transparently.
By default, Virtual Memory is enabled and will consume the available space on the HD. So, your test will continue running until it has either allocated the full amount of virtual memory for the process (2GB on 32-bit windows) or filled the hard disk.
The C90 standard guarantees that you can get at least one object 32 KB in size, and this may be static, dynamic, or automatic memory. C99 guarantees at least 64 KB. For any higher limit, refer to your compiler's documentation.
Also, malloc's argument is a size_t and the range of that type is [0, SIZE_MAX], so the maximum you can request is SIZE_MAX, whose value varies by implementation and is defined in <stdint.h>.
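For reference, a two-line check of that upper bound (note that SIZE_MAX comes from <stdint.h>):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* The largest value malloc's size_t parameter can represent. */
    printf("SIZE_MAX = %zu\n", (size_t)SIZE_MAX);
    return 0;
}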
I don't actually know why that failed, but one thing to note is that malloc(4) may not actually give you just 4 bytes, so this technique is not really an accurate way to find your maximum heap size.
I found this out from my question here.
For instance, when you declare 4 bytes of memory, the space directly before your memory could contain the integer 4, as an indication to the allocator of how much memory you asked for.
Does malloc allocate memory from HD also?
Implementation of malloc() depends on libc implementation and operating system (OS). Typically malloc() doesn't always request RAM from the OS but returns a pointer to previously allocated memory block "owned" by libc.
In case of POSIX compatible systems, this libc controlled memory area is usually increased using syscall brk(). That doesn't allow releasing any memory between two still existing allocations which causes the process to look still using all the RAM after allocating areas A, B, C in sequence and releasing B. This is because areas A and C around the area B are still in use so the memory allocated from the OS cannot be returned.
Many modern malloc() implementations have some kind of heuristic where small allocations use the memory area reserved via brk() and "big" allocations use anonymous virtual memory blocks reserved via mmap() using MAP_ANONYMOUS flag. This allows immediately returning these big allocations when free() is later called. Typically the runtime performance of mmap() is slightly slower than using previously reserved memory which is the reason malloc() implements this heuristic.
Both brk() and mmap() allocate virtual memory from the OS. And virtual memory can be always backed up by swap which may be stored in any storage that the OS supports, including HDD.
In case you run Windows, the syscalls have different names but the underlying behavior is probably about the same.
What was the reason for above behaviour?
Since your example code never touched the memory, I'd guess you're seeing behavior where the OS implements copy-on-write for virtual RAM and the memory is mapped to a shared page filled entirely with zeroes by default. Modern operating systems do this because many programs allocate more RAM than they actually need, and using a shared zero page by default for all memory allocations avoids needing to use real RAM for these allocations.
If you want to test how OS handles your loop and actually reserve true storage, you need to write something to the memory you allocated. For x86 compatible hardware you only need to write one byte per each 4096 byte segment because page size is 4096 and the hardware cannot implement copy-on-write behavior for smaller segments; once one byte is modified, the whole 4096 byte segment called page must be reserved for your process. I'm not aware of any modern CPU that would support smaller than 4096 byte pages. Modern Intel CPUs support 2 MB and 1 GB pages in addition to 4096 byte pages but the 1 GB pages are rarely used because the overhead of using 2 MB pages is small enough for any sensible RAM amounts. 1 GB pages might make sense if your system has hundreds of terabytes of RAM.
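A hedged illustration of that trick, touching one byte per page so each page really gets committed (the 64 MiB size is illustrative; the page size is queried rather than hard-coded):
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    size_t nbytes = 64 * 1024 * 1024;  /* 64 MiB, illustrative */
    long page = sysconf(_SC_PAGESIZE); /* usually 4096 on x86 */
    char *buf = malloc(nbytes);
    if (buf == NULL)
        return 1;
    /* One write per page is enough to break copy-on-write sharing of the
       zero page and force the kernel to hand out real page frames. */
    for (size_t off = 0; off < nbytes; off += (size_t)page)
        buf[off] = 1;
    free(buf);
    return 0;
}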
So basically your program only tested reserving virtual memory without ever using said virtual memory. Your OS probably has special optimization for this which avoids needing more than 4 KB of RAM to support this.
Unless your objective is to try to measure the overhead caused by your malloc() implementation, you should avoid trying to allocate memory blocks smaller than 16-32 bytes. For mmap() allocations the minimum possible overhead is 8 bytes per allocation on x86-64 hardware due to the data needed to return the memory to the operating system, so it really doesn't make sense for malloc() to use the mmap() syscall for a single 4-byte allocation.
The overhead is needed to keep track of memory allocations because the memory is freed using void free(void*) so memory allocation routines must keep track of the allocated memory segment size somewhere. Many malloc() implementations also need additional metadata and if they need to keep track of any memory addresses, those need 8 bytes per address.
If you truly want to search for the limits of your system, you should probably do binary search for the limit where malloc() fails. In practice, you try to allocate ..., 1KB, 2KB, 4KB, 8KB, ..., 32 GB which then fails and you know that the real world limit is between 16 GB and 32 GB. You can then split this size in half and figure out the exact limit with additional testing. If you do this kind of search, it may be easier to always release any successful allocation and reserve the test block with a single malloc() call. That should also avoid accidentally accounting for malloc() overhead so much because you need only one allocation at any time at max.
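A hedged sketch of that search strategy (starting size and step are illustrative; each probe frees its block immediately, so only one allocation is live at any time):
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Returns 1 if a block of `size` bytes can be allocated right now. */
static int can_alloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL)
        return 0;
    free(p);
    return 1;
}

int main(void) {
    size_t lo = (size_t)1 << 20; /* assume 1 MiB always works */
    size_t hi;

    if (!can_alloc(lo)) {
        printf("even 1 MiB failed\n");
        return 1;
    }
    /* Phase 1: double the request until it fails (or would overflow). */
    hi = lo;
    while (hi <= SIZE_MAX / 2 && can_alloc(hi * 2))
        hi *= 2;
    lo = hi;              /* largest size known to succeed */
    if (hi > SIZE_MAX / 2) {
        printf("largest tested malloc(): %zu MiB\n", lo >> 20);
        return 0;
    }
    hi *= 2;              /* smallest size known to fail */

    /* Phase 2: bisect until the bracket is 1 MiB wide. */
    while (hi - lo > ((size_t)1 << 20)) {
        size_t mid = lo + (hi - lo) / 2;
        if (can_alloc(mid))
            lo = mid;
        else
            hi = mid;
    }
    printf("largest single malloc() is about %zu MiB\n", lo >> 20);
    return 0;
}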
Update: As pointed out by Peter Cordes in the comments, your malloc() implementation may be writing bookkeeping data about your allocations in the reserved RAM which causes real memory to be used and that can cause system to start swapping so heavily that you cannot recover it in any sensible timescale without shutting down the computer. In case you're running Linux and have enabled "Magic SysRq" keys, you could just press Alt+SysRq+f to kill the offending process taking all the RAM and system would run just fine again. It is possible to write malloc() implementation that doesn't usually touch the RAM allocated via brk() and I assumed you would be using one. (This kind of implementation would allocate memory in 2^n sized segments and all similarly sized segments are reserved in the same range of addresses. When free() is later called, the malloc() implementation knows the size of the allocation from the address and bookkeeping about free memory segments are kept in separate bitmap in single location.) In case of Linux, malloc() implementation touching the reserved pages for internal bookkeeping is called dirtying the memory, which prevents sharing memory pages because of copy-on-write handling.
Why didn't loop break at any point of time?
If your OS implements the special behavior described above and you're running a 64-bit system, you're not going to run out of virtual memory in any sensible timescale, so your loop seems infinite.
Why wasn't there any allocation failure?
You didn't actually use the memory, so you're allocating virtual memory only. You're basically increasing the maximum pointer value allowed for your process, but since you never access the memory, the OS never bothers to reserve any physical memory for your process.
In case you're running Linux and want the system to enforce virtual memory usage to match actually available memory, you have to write 2 to kernel setting /proc/sys/vm/overcommit_memory and maybe adjust overcommit_ratio, too. See https://unix.stackexchange.com/q/441364/20336 for details about memory overcommit on Linux. As far as I know, Windows implements overcommit, too, but I don't know how to adjust its behavior.
When you first allocate some memory to *p, on every following iteration you leave the previous memory unreferenced. That means
at any point in time your program only holds a reference to 4 bytes of allocated memory.
So how can you think you have used the entire RAM? That is why the swap device (temporary space on the HDD) is out of the discussion. I know of a memory management algorithm in which, when no program is referencing a memory block, that block becomes eligible to satisfy another program's memory request. That is why you are just keeping the RAM driver busy, and why it cannot get a chance to service other programs. This is also a dangling-reference problem.
Answer: you can allocate at most the size of your RAM, because no program has direct access to the swap device.
I hope all your questions have got satisfactory answers.

Resources