Malloc & calloc: different memory size allocated - c

So, I have this piece of code:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    char *p;
    long n = 1;

    while (1) {
        p = malloc(n * sizeof(char));
        //p = calloc(n, sizeof(char));
        if (p) {
            printf("[%ld] Memory allocation successful! Address: %p\n", n, p);
            n++;
        } else {
            printf("No more memory! Sorry...\n");
            break;
        }
    }
    free(p);
    getchar(); /* was getch(), which is non-standard and needs <conio.h> */
    return 0;
}
And I ran it on Windows. An interesting thing:
if we use malloc, the program allocates about 430 MB of memory and then stops (screenshot: http://i.imgur.com/woswThG.png)
if we use calloc, the program allocates about 2 GB of memory and then stops (screenshot: http://i.imgur.com/3JKy5pA.png)
(strange test): if we use both of them at the same time, it uses a maximum of (~400 MB + ~2 GB) / 2 => ~1.2 GB
However, if I run the same code on Linux, the allocation goes on and on (after 600k allocations and many GB used it still continues, until it is eventually killed), and approximately the same amount of memory is used in both cases.
So my question is: shouldn't they allocate the same amount of memory? I thought the only difference was that calloc initializes the memory to zero (malloc returns uninitialized memory). And why does this only happen on Windows? It's strange and interesting at the same time.
Hope you can help me with an explanation for this. Thanks!
Edit:
Code::Blocks 13.12 with GNU GCC Compiler
Windows 10 (x64)
Linux Mint 17.2 "Rafaela" - Cinnamon (64-bit) (for Linux testing)

Looking at the program output, you actually allocate essentially the same number of blocks: 65188 for malloc, 65189 for calloc. Ignoring overhead, that's slightly less than 2 GB of memory.
My guess is that you compile in 32-bit mode (the pointers are printed as 32-bit values), which limits the amount of memory available to a single user process to less than 2 GB. The difference in the process-memory display comes from how your program uses the memory it allocates.
The malloc version does not touch the allocated pages: more than three quarters of them are never actually mapped, hence only 430 MB.
The calloc version shows 2 GB of mapped memory: chances are your C library's calloc clears the allocated memory even for pages freshly obtained from the OS. This is suboptimal, but only visible if you never touch the allocated memory, a special case anyway. It would be faster not to clear pages obtained from the OS, as they are specified to be zero-filled anyway.
On Linux, you may be compiling to 64 bits, getting access to much more than 2 GB of virtual process space. Since you do not touch the memory, it is not mapped, and the same seems to happen in the calloc case as well. The C runtime is different (64-bit glibc on Linux, 32-bit Microsoft library on Windows). You should use top or ps in a different terminal on Linux to check how much memory is actually mapped to your process in both cases.
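For the Linux check, here is a minimal, Linux-specific sketch that reads the mapping figures from inside the program instead of from top or ps. It parses /proc/self/status; VmSize is the virtual size and VmRSS the resident (actually mapped) portion, so an untouched malloc shows up in VmSize but not in VmRSS:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print the VmSize (virtual) and VmRSS (resident) lines of the
   current process. Linux-specific: parses /proc/self/status. */
static void print_mem_usage(void)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (f == NULL)
        return;
    while (fgets(line, sizeof line, f) != NULL)
        if (strncmp(line, "VmSize:", 7) == 0 || strncmp(line, "VmRSS:", 6) == 0)
            fputs(line, stdout);
    fclose(f);
}

int main(void)
{
    void *p = malloc(100 * 1024 * 1024);   /* 100 MB, never touched */
    print_mem_usage();   /* VmSize includes the 100 MB; VmRSS stays small */
    free(p);
    return 0;
}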

Related

A technical question about dynamic memory allocation in C

I'm studying dynamic memory allocation in C, and I want to ask a question. Suppose we have a program that receives text input from the user. We don't know how long that text will be; it could be short or extremely long. So we know we have to allocate memory to store the text in a buffer. In cases where we receive a very long text, is there a way to find out whether we have enough memory space left to allocate more for the text? Is there a way to get an indication that there is no memory left?
You can use the malloc() function: if it returns NULL, there is not enough memory left; if it returns the address of a memory block, that memory is available. Example:
char *loc = malloc(length + 1);   /* room for a 'length'-char string plus its terminator */
if (loc == NULL) { /* allocation failed */ }
ANSI C has no standard functions to get the size of available free RAM.
You may use platform-specific solutions.
C - Check currently available free RAM?
In C we typically use malloc, calloc and realloc to allocate dynamic memory. As has been pointed out in both answers and comments, these functions return a NULL pointer in case of failure. So typical C code would be:
SomeType *p = malloc(size_required);
if (p == NULL)
{
    // oops... malloc failed... add error handling
}
else
{
    // great... p now points to allocated memory that we can use
}
I'd like to add that on (at least) Linux systems, the return value from malloc (and friends) is not really an out-of-memory indicator.
If the return value is NULL, we know the call failed and that we didn't get any memory that we can use.
But even if the return value is non-NULL, there is no guarantee that the memory really is available.
From https://man7.org/linux/man-pages/man3/free.3.html :
By default, Linux follows an optimistic memory allocation
strategy. This means that when malloc() returns non-NULL there
is no guarantee that the memory really is available. In case it
turns out that the system is out of memory, one or more processes
will be killed by the OOM killer.
We don't know how long that text will be
Sure we do: we always set a maximum limit, because all user input needs to be sanitised anyway. So we always require a maximum limit on every single user input. If you don't do this, your program is likely broken, since it's vulnerable to buffer overruns.
Typically you'll read each line of user input into a local array allocated on the stack. Then you can check whether it is valid (strings null-terminated etc.) before allocating dynamic memory and copying it over there.
By checking the return value of malloc etc you'll see if there was enough memory left or not.
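A minimal sketch of that read-validate-copy pattern, assuming an arbitrary MAX_LINE limit of 4096 bytes:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_LINE 4096   /* arbitrary upper limit on a single input line */

int main(void)
{
    char buf[MAX_LINE];   /* stack buffer for the raw input */

    if (fgets(buf, sizeof buf, stdin) == NULL)
        return 1;   /* EOF or read error */

    buf[strcspn(buf, "\n")] = '\0';   /* strip the trailing newline, if any */

    /* Copy the validated line into dynamic memory. */
    char *copy = malloc(strlen(buf) + 1);
    if (copy == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    strcpy(copy, buf);

    printf("stored: %s\n", copy);
    free(copy);
    return 0;
}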
There is no standard library function that tells you how much memory is available for use.
The best you can do within the bounds of the standard library is to attempt the allocation using malloc, calloc, or realloc and check the return value: if it's NULL, then the allocation operation failed.
There may be system-specific routines that can provide that information, but I don’t know of any off the top of my head.
I ran a test on Linux with 8 GB of RAM. Overcommit has three main modes, 0, 1 and 2, which mean default, unlimited, and never:
Default:
$ echo 0 > /proc/sys/vm/overcommit_memory
$ ./a.out
After loop: Cannot allocate memory
size 17179869184
size 400000000
log2(size)34.000000
This means 8.5 GB (2^33 bytes) were successfully allocated; that is just about the amount of physical RAM. I tried to tweak it, but without changing swap, which is only 4 GB.
Unlimited:
$ echo 1 > /proc/sys/vm/overcommit_memory
$ ./a.out
After loop: Cannot allocate memory
size 140737488355328
size 800000000000
log2(size)47.000000
48 bits is the virtual address size; 2^47 bytes is about 140 TB. The physical address size is only 39 bits (about 500 GB).
No overcommit:
$ echo 2 > /proc/sys/vm/overcommit_memory
$ ./a.out
After loop: Cannot allocate memory
size 2147483648
size 80000000
log2(size)31.000000
2 GB is just what the free command declares as free; available is 4.6 GB.
malloc() fails in the same way if the process's resources are restricted, so this ENOMEM does not really tell you much. "Cannot allocate memory" (aka ENOMEM, aka errno 12) just says "malloc failed, guess why", or rather "malloc failed, NO more MEMory for you now."
Well, here is the a.out that allocates doubling sizes until an error occurs:
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <math.h>   /* link with -lm */

int main() {
    size_t sz = 4096;
    void *p;

    while (!errno) {
        p = malloc(sz *= 2);   /* glibc sets errno to ENOMEM on failure */
        free(p);
    }
    perror("After loop");
    printf("size %zu\n", sz);
    printf("size %zx\n", sz);
    printf("log2(size)%f\n", log2((double)sz));
}
But I don't think this kind of probing is very useful.
Buffer
we have to allocate memory to store the text in a buffer
But not the whole text at once. With a real buffer (not just allocated memory as the destination) you could read portions of the input and store them away (out of memory, onto disk).
The only disadvantage: if I cannot use a partial input, then all the buffered copying and saving is wasted.
I really wonder what happens if I type, and type fast, for a couple of billion years, without a newline.
We can accept much more input than we have RAM, but we only need a fraction of that RAM: the buffer. But Lundin's answer shows it is much easier (and typical) to rely on newlines and a maximum length.
getline(3)
This GNU/POSIX function has the malloc/realloc built in. The parameters are a bit complicated, because a new pointer and size can be returned by reference. A return value of -1 can also mean ENOMEM, not just end-of-file.
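A minimal sketch of the calling convention, following the glibc man page (errno is cleared before each call so that ENOMEM can be told apart from plain end-of-file):
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <sys/types.h>   /* ssize_t */

int main(void)
{
    char *line = NULL;   /* getline mallocs/reallocs this for us */
    size_t cap = 0;      /* current capacity of the buffer */
    ssize_t len;

    errno = 0;
    while ((len = getline(&line, &cap, stdin)) != -1) {
        printf("read %zd bytes\n", len);
        errno = 0;
    }
    if (errno == ENOMEM)
        fprintf(stderr, "out of memory\n");

    free(line);   /* must be freed even after a -1 return */
    return 0;
}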
fgets() is the line-truncating version.
fread() is newline-independent, with a fixed size. (But you asked about text input: long lines, long overall text, or both?)
Good Q, good As, good comments about "live input":
getline how to limit amount of input as you can with fgets

Cannot Exhaust Physical Memory in 32 bit Linux

So I've got an interesting OS based problem for you. I've spent the last few hours conversing with anyone I know who's experienced with C programming, and nobody seems to be able to come up with a definitive answer as to why this behaviour is occurring.
I have a program that is intentionally designed to cause an extreme memory leak, (as an example of what happens when you don't free memory after allocating it). On 64 bit operating systems, (Windows, Linux, etc), it does what it should do. It fills physical ram, then fills the swap space of the OS. In Linux, the process is then terminated by the OS. In Windows however, it is not, and it continues running. The eventual result is a system crash.
Here's the code:
#include <stdlib.h>
#include <stdio.h>

void main()
{
    while(1)
    {
        int *a;
        a = (int*)calloc(65536, 4);
    }
}
However, if you compile and run this code on a 32-bit Linux distribution, it has no effect on physical memory usage at all. It uses approximately 1% of my 4 GB of RAM and never rises above that. I don't have a legitimate copy of 32-bit Windows to test on, so I can't be certain this occurs on 32-bit Windows as well.
Can somebody please explain why the use of calloc will fill the physical RAM of a 64-bit Linux OS, but not a 32-bit Linux OS?
The malloc and calloc functions do not technically allocate memory, despite their name. They actually allocate portions of your program's address space with OS-level read/write permissions. This is a subtle difference and is not relevant most of the time.
This program, as written, only consumes address space. Eventually, calloc will start returning NULL but the program will continue running.
#include <stdlib.h>

// Note main should be int.
int main() {
    while (1) {
        // Note calloc should not be cast.
        int *a = calloc(65536, sizeof(int));
    }
}
If you write to the addresses returned from calloc, it will force the kernel to allocate memory to back those addresses.
#include <stdlib.h>
#include <string.h>

int main() {
    size_t size = 65536 * 4;
    while (1) {
        // Allocates address space.
        void *p = calloc(size, 1);
        // Forces the address space to have allocated memory behind it.
        memset(p, 0, size);
    }
}
It's not enough to write to a single location in the block returned from calloc, because the granularity at which the kernel allocates real memory is the page size (4 KiB is the most common). So you can get by with writing just one byte per page.
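A hedged sketch of that per-page variant; it hard-codes the common 4 KiB page (a portable program would query sysconf(_SC_PAGESIZE)):
#include <stdlib.h>

int main(void)
{
    size_t size = 65536 * 4;
    while (1) {
        char *p = calloc(size, 1);
        if (p == NULL)
            break;
        /* One write per 4 KiB page is enough to force the kernel to
           back the whole block with real memory. */
        for (size_t i = 0; i < size; i += 4096)
            p[i] = 1;
    }
    return 0;
}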
What about the 64-bit case?
There is some bookkeeping overhead for allocating address space. On a 64-bit system, you get something like 40 or 48 bits of address space, of which about half can be allocated to the program, which comes to at least 8 TiB. On a 32-bit system this comes to 2 GiB or so (depending on kernel configuration).
So on a 64-bit system you can allocate ~8 TiB and on a 32-bit system ~2 GiB, and the overhead is what causes the problems: there is typically a small amount of overhead for each call to malloc or calloc.
See also Why malloc+memset is slower than calloc?

malloc() in C And Memory Usage

I was trying an experiment with malloc to see if I could allocate all the memory available.
I used the following simple program and have a few questions:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *ptr;
    int x = 100;

    while (1)
    {
        ptr = (char *) malloc(x++ * sizeof(char) / 2);
        printf("%p\n", (void *)ptr);
    }
    return 0;
}
1) Why is it that when using larger data types (int, unsigned long long int, long double) the process uses less memory, but with smaller data types (int, char) it uses more?
2) When running the program, it stops allocating memory after it reaches a certain amount (~592 MB on Windows 7 64-bit with 8 GB RAM and a system-managed swap file). The printf output then shows 0, which means NULL. Why does it stop allocating memory after reaching this threshold instead of exhausting the system memory and swap?
I found someone in the following post trying the same thing as me, but the difference is that they were not seeing any change in memory usage, whereas I am.
Memory Leak Using malloc fails
I've tried the code on Linux kernel 2.6.32-5-686 with similar results.
Any help and explanation would be appreciated.
Thanks,
1) Usually memory is allocated in multiples of pages, so if the size you ask for is less than a page, malloc will allocate at least one page.
2) This makes sense: in a multitasking system, you're not the only user and your process is not the only process running; there are many other processes sharing a limited set of resources, including memory. If the OS allowed one process to allocate all the memory it wants without any limitation, it wouldn't really be a good OS, right?
Finally, on Linux, the kernel doesn't allocate any physical memory pages until you actually start using that memory, so just calling malloc doesn't consume any physical memory, other than what is required to keep track of the allocation itself, of course. I'm not sure about Windows, though.
Edit:
The following example allocates 1 GB of virtual memory:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    void *p = malloc(1024*1024*1024);
    getc(stdin);
}
If you run top you get
top -p `pgrep test`
PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20 0 1027m 328 252 S 0 0.0 0:00.00 test
If you change malloc to calloc, and run top again you get
top -p `pgrep test`
PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20 0 1027m 1.0g 328 S 0 1.3 0:00.08 test
How are you reading your memory usage?
1) When allocating with char, you're allocating less memory per allocation than you would with, for example, long (usually one quarter as much, but it's machine-dependent).
Since most memory-usage tools external to the program don't show allocated memory but actually used memory, they will only show the overhead malloc() itself uses, not the untouched memory you malloc'd.
More allocations, more overhead.
You should get a very different result if you fill each malloc'd block with data, so the memory is actually used.
2) I assume you're reading that from the same tool? Try counting how many bytes you actually allocate instead, and it should show the correct amount rather than just the malloc overhead.
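A minimal sketch of that counting approach (it keeps the question's intentional leak, since freeing would defeat the probe; total counts requested bytes only, not allocator overhead):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t total = 0;   /* bytes successfully requested so far */
    size_t x = 100;
    char *ptr;

    /* Keep allocating (and leaking, as in the question) until failure. */
    while ((ptr = malloc(x / 2)) != NULL)
    {
        total += x / 2;
        x++;
    }
    printf("requested %zu bytes before malloc returned NULL\n", total);
    return 0;
}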
1) When you allocate memory, each allocation takes the space of the requested memory plus the size of a heap frame. See a related question here
2) The size of any single malloc is limited in Windows to _HEAP_MAXREQ. See this question for more info and some workarounds.
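For reference, a hedged sketch of inspecting that limit; it assumes the Microsoft C runtime, where <malloc.h> defines _HEAP_MAXREQ (the #ifdef guard keeps it compilable elsewhere):
#include <stdio.h>
#include <malloc.h>   /* on MSVC this defines _HEAP_MAXREQ */

int main(void)
{
#ifdef _HEAP_MAXREQ
    /* Largest size a single malloc/calloc request may have. */
    printf("_HEAP_MAXREQ = %llu bytes\n", (unsigned long long)_HEAP_MAXREQ);
#else
    printf("_HEAP_MAXREQ is not defined on this C runtime\n");
#endif
    return 0;
}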
1) This could come from the fact that memory is paged and every page has the same size. If your data fails to fit in one page and falls 'in between' two pages, I think it is moved to the beginning of the next page, creating a loss of space at the end of the previous page.
2) The threshold is smaller because every program is restricted to a certain amount of data, which is not the total maximum memory you have.

Memory usage problem in C

Please help :)
OS : Linux
Where in " sleep(1000);", at this time "top (display Linux tasks)" wrote me 7.7 %MEM use.
valgrind : not found memory leak.
I understand, wrote correctly and all malloc result is NULL.
But Why in this time "sleep" my program NOT decreased memory ? What missing ?
Sorry for my bad english, Thanks
~ # tmp_soft
For : Is it free?? no
Is it free?? yes
For 0
For : Is it free?? no
Is it free?? yes
For 1
END : Is it free?? yes
END
~ #top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
23060 root 20 0 155m 153m 448 S 0 7.7 0:01.07 tmp_soft
Full source : tmp_soft.c
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

struct cache_db_s
{
    int table_update;
    struct cache_db_s * p_next;
};

void free_cache_db (struct cache_db_s ** cache_db)
{
    struct cache_db_s * cache_db_t;
    while (*cache_db != NULL)
    {
        cache_db_t = *cache_db;
        *cache_db = (*cache_db)->p_next;
        free(cache_db_t);
        cache_db_t = NULL;
    }
    printf("Is it free?? %s\n", *cache_db==NULL ? "yes" : "no");
}

void make_cache_db (struct cache_db_s ** cache_db)
{
    struct cache_db_s * cache_db_t = NULL;
    int n = 10000000;
    for (int i=0; i = n; i++)
    {
        if ((cache_db_t = malloc(sizeof(struct cache_db_s))) == NULL) {
            printf("Error : malloc 1 -> cache_db_s (no free memory) \n");
            break;
        }
        memset(cache_db_t, 0, sizeof(struct cache_db_s));
        cache_db_t->table_update = 1; // tmp
        cache_db_t->p_next = *cache_db;
        *cache_db = cache_db_t;
        cache_db_t = NULL;
    }
}

int main(int argc, char **argv)
{
    struct cache_db_s * cache_db = NULL;
    for (int ii=0; ii < 2; ii++) {
        make_cache_db(&cache_db);
        printf("For : Is it free?? %s\n", cache_db==NULL ? "yes" : "no");
        free_cache_db(&cache_db);
        printf("For %d \n", ii);
    }
    printf("END : Is it free?? %s\n", cache_db==NULL ? "yes" : "no");
    printf("END \n");
    sleep(1000);
    return 0;
}
For good reasons, virtually no memory allocator returns blocks to the OS.
Memory can only be removed from your program in units of pages, and even that is unlikely to be observed.
calloc(3) and malloc(3) do interact with the kernel to get memory, if necessary. But very, very few implementations of free(3) ever return memory to the kernel [1]; they just add it to a free list that calloc() and malloc() will consult later in order to reuse the released blocks. There are good reasons for this design approach.
Even if a free() wanted to return memory to the system, it would need at least one contiguous memory page in order to get the kernel to actually protect the region, so releasing a small block would only lead to a protection change if it was the last small block in a page.
Theory of Operation
So malloc(3) gets memory from the kernel when it needs it, ultimately in units of discrete page multiples. These pages are divided or consolidated as the program requires. Malloc and free cooperate to maintain a directory. They coalesce adjacent free blocks when possible in order to be able to provide large blocks. The directory may or may not involve using the memory in freed blocks to form a linked list. (The alternative is a bit more shared-memory and paging-friendly, and it involves allocating memory specifically for the directory.) Malloc and free have little if any ability to enforce access to individual blocks even when special and optional debugging code is compiled into the program.
1. The fact that very few implementations of free() attempt to return memory to the system is not at all due to the implementors slacking off. Interacting with the kernel is much slower than simply executing library code, and the benefit would be small. Most programs have a steady-state or increasing memory footprint, so the time spent analyzing the heap looking for returnable memory would be completely wasted. Other reasons include the fact that internal fragmentation makes page-aligned blocks unlikely to exist, and it's likely that returning a block would fragment the blocks to either side. Finally, the few programs that do return large amounts of memory are likely to bypass malloc() and simply allocate and free pages anyway.
If you're trying to establish whether your program has a memory leak, then top isn't the right tool for the job (valgrind is).
top shows memory usage as seen by the OS. Even if you call free, there is no guarantee that the freed memory would get returned to the OS. Typically, it wouldn't. Nonetheless, the memory does become "free" in the sense that your process can use it for subsequent allocations.
Edit: If your libc supports it, you could try experimenting with M_TRIM_THRESHOLD. Even if you follow this path, it's going to be tricky (a single used block sitting close to the top of the heap would prevent all the free memory below it from being released to the OS).
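A minimal sketch of that experiment, assuming glibc (mallopt() and M_TRIM_THRESHOLD come from <malloc.h>; the 128 KiB threshold is an arbitrary example value):
#include <malloc.h>   /* glibc: mallopt, M_TRIM_THRESHOLD */
#include <stdio.h>

int main(void)
{
    /* Ask glibc to give heap-top memory back to the OS as soon as
       128 KiB of contiguous free space accumulates there.
       mallopt returns 1 on success, 0 on failure. */
    if (mallopt(M_TRIM_THRESHOLD, 128 * 1024) != 1)
        fprintf(stderr, "mallopt failed; default trim policy kept\n");

    /* ... allocate and free as usual; watch RES in top ... */
    return 0;
}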
Generally free() doesn't give physical memory back to the OS; the pages remain mapped in your process's virtual memory. If you allocate a big chunk of memory, libc may allocate it with mmap(); then if you free it, libc may release the memory to the OS with munmap(), in which case top will show your memory usage come down.
So, if you want to release memory to the OS explicitly, you can use mmap()/munmap() yourself.
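A minimal sketch of that explicit route (Linux/POSIX; MAP_ANONYMOUS gets memory straight from the kernel, and munmap returns it immediately, which is visible in top):
#include <stdio.h>
#include <sys/mman.h>   /* mmap, munmap */

int main(void)
{
    size_t len = 100 * 1024 * 1024;   /* 100 MB */

    /* Map anonymous memory directly from the kernel. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* ... use the memory ... */

    /* Hand it back to the OS; the process's mapped size drops at once. */
    if (munmap(p, len) == -1) {
        perror("munmap");
        return 1;
    }
    return 0;
}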
When you free() memory, it is returned to the standard C library's pool of memory, not to the operating system. From the operating system's point of view, as you see it through top, the process is still "using" this memory. Within the process, the C library has accounted for the memory and could return the same pointer from malloc() in the future.
Let me explain it some more, with a different beginning:
During your calls to malloc, the standard library implementation may determine that the process does not have enough memory allocated from the operating system. At that time, the library will make a system call to receive more memory from the operating system (for example, the sbrk() or VirtualAlloc() system calls on Unix or Windows, respectively).
After the library requests additional memory from the operating system, it adds this memory to its structure of memory available to return from malloc. Later calls to malloc will use this memory until it runs out. Then, the library asks the operating system for even more memory.
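A small sketch that makes the reuse visible (not guaranteed by the C standard, but typical allocator behaviour):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Allocate, free, then allocate the same size again: most
       allocators hand back the block they just put on the free list,
       without asking the operating system for anything. */
    void *a = malloc(64);
    printf("first : %p\n", a);
    free(a);

    void *b = malloc(64);
    printf("second: %p\n", b);   /* very often the same address as 'a' */
    free(b);
    return 0;
}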
When you free memory, the library usually does not return it to the operating system. There are many reasons for this. One reason is that the library author assumed you will call malloc again; and if you will not call malloc again, your program will probably end soon. In either case, there is not much advantage to returning the memory to the operating system.
Another reason is that memory from the operating system is allocated in large, contiguous ranges, and it could only be returned when an entire contiguous range is no longer in use. The pattern of calls to malloc and free may never leave an entire range unused.
Two problems:
In make_cache_db(), the line
for (int i=0; i = n; i++)
should probably read
for (int i=0; i<n; i++)
Otherwise, you'll only allocate a single cache_db_s node.
The way you're assigning cache_db in make_cache_db() also looks suspect. It seems your intention is to return a pointer to the first element of the linked list; but because you reassign cache_db on every iteration of the loop, you'll end up returning a pointer to the last element of the list.
If you later free the list using free_cache_db(), this will cause you to leak memory. At the moment, though, the problem is masked by the bug described in the previous bullet point, which causes you to allocate lists of only length 1.
Independent of these bugs, the point raised by aix is very valid: The runtime library need not return all free()d memory to the operating system.

Heap size limitation in C

I have a question regarding the heap in the program execution layout of a C program.
I know that all dynamically allocated memory is allotted on the heap, which grows dynamically. But I would like to know: what is the max heap size for a C program?
I am attaching a sample C program in which I try to allocate 1 GB of memory for a buffer, and I even memset it...
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>   /* sleep() */

int main(int argc, char *argv[])
{
    char *mybuffer;   /* declaration was missing from the original listing */
    char *temp;

    mybuffer = malloc(1024*1024*1024*1);
    temp = memset(mybuffer, 0, (1024*1024*1024*1));
    if ((mybuffer == temp) && (mybuffer != NULL))
        printf("%p - %p\n", (void *)mybuffer, (void *)&mybuffer[(1024*1024*1024*1) - 1]);
    else
        printf("Wrong\n");
    sleep(20);
    free(mybuffer);
    return 0;
}
If I run the above program in 3 instances at once, then malloc should fail in at least one instance [I feel so]... but malloc is still successful.
If it is successful, can somebody explain how the OS takes care of 3 GB of dynamically allocated memory?
Your machine is very probably overcommitting on RAM, and not using the memory until you actually write to it. Try writing to each block after allocating it, thus forcing the operating system to ensure there's real RAM mapped to the address malloc() returned.
From the Linux malloc man page:
BUGS
By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer. In case Linux is employed under circumstances where it would be less desirable to suddenly lose some randomly picked processes, and moreover the kernel version is sufficiently recent, one can switch off this overcommitting behavior using a command like:
# echo 2 > /proc/sys/vm/overcommit_memory
See also the kernel Documentation directory, files vm/overcommit-accounting and sysctl/vm.txt.
You're mixing up physical memory and virtual memory.
http://apollo.lsc.vsc.edu/metadmin/references/sag/x1752.html
http://en.wikipedia.org/wiki/Virtual_memory
http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory
malloc will allocate the memory but does not write to any of it. So if the virtual memory is available, it will succeed. It is only when you write something to it that real memory needs to be committed (and eventually paged to the page file).
calloc, if memory serves correctly(!), writes zeros to each byte of the allocated memory before returning, so it needs to have the pages allocated there and then.
