Memory allocated but not used unless initialized? - C

This is a followup to the question I just asked here.
I've created a simple program to help myself understand memory allocation, malloc() and free(). Notice the commented-out free line. I created an intentional memory leak so I can watch the Windows-reported "Mem Usage" slowly bloat to 1 GB. But then I found something stranger. If I also comment out the loop just above the free line, so that I never initialize my storage block with random ints, it appears that the space doesn't actually get "claimed" from the OS by the program. Why is this?
Sure, I haven't initialized the block, but I have claimed it, so shouldn't the OS still see that the program is using 1 GB, whether or not that gigabyte is initialized?
#include <stdio.h>
#include <stdlib.h>

void alloc_one_meg() {
    int *pmeg = (int *) malloc(250000*sizeof(int));
    int *p = pmeg;
    int i;
//  for (i=0; i<250000; i++)   /* removing this loop causes memory to not be used? */
//      *p++ = rand();
//  free((void *)pmeg);        /* removing this line causes memory leak! */
}

main()
{
    int i;
    for (i=0; i<1000; i++) {
        alloc_one_meg();
    }
}

Allocated memory can be in two states in Windows: Reserved and Committed (see the documentation of VirtualAlloc about MEM_RESERVE: "Reserves a range of the process's virtual address space without allocating any actual physical storage in memory or in the paging file on disk.").
If you allocate memory but do not use it, it remains in the Reserved state, and the OS doesn't count it as used memory. When you try to use it (whether the transition happens only on write, or on both read and write, I do not know; you might want to run a test to find out), it becomes Committed memory, and the OS counts it as used.
Also, the memory allocated by malloc will not be full of zeros (it may happen to be, but that's not guaranteed), because you have not initialised it.
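To make the Reserved/Committed distinction concrete, here is a minimal sketch using the VirtualAlloc API the answer cites (my own illustration, not something from the original answer; whether your CRT's malloc uses reserve-then-commit like this is implementation-dependent):

#include <windows.h>
#include <stdio.h>

/* Sketch only: reserve a 1 GB address range without committing it, then
 * commit (and touch) a single page. Task Manager should only show the
 * usage grow once pages are committed and written. */
int main(void)
{
    SIZE_T size = (SIZE_T)1 << 30;   /* 1 GB of address space */

    /* Reserve: no physical storage or pagefile space is charged yet. */
    void *base = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    if (base == NULL) {
        fprintf(stderr, "reserve failed: %lu\n", (unsigned long)GetLastError());
        return 1;
    }

    /* Commit one page inside the reservation and write to it. */
    void *page = VirtualAlloc(base, 4096, MEM_COMMIT, PAGE_READWRITE);
    if (page != NULL) {
        ((unsigned char *)page)[0] = 0xAA;   /* touching it makes it resident */
    }

    getchar();                        /* pause so you can inspect the process */
    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}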

It could be a compiler optimization: since the memory is never used at all, the compiler may simply drop the allocation, depending on the compiler and the optimization options.
I tested your code; with the line free((void *)pmeg); in place, it does not cause any memory leak for me.


Is it really important to free allocated memory if the program's just about to exit? [duplicate]

I understand that if you're allocating memory to store something temporarily, say in response to a user action, and by the time the code gets to that point again you don't need the memory anymore, you should free the memory so it doesn't cause a leak. In case that wasn't clear, here's an example of when I know it's important to free memory:
#include <stdio.h>
#include <stdlib.h>

void countToNumber(int n)
{
    int *numbers = malloc(sizeof(int) * n);
    int i;
    for (i=0; i<n; i++) {
        numbers[i] = i+1;
    }
    for (i=0; i<n; i++) {
        // Yes, simply using "i+1" instead of "numbers[i]" in the printf would make the array unnecessary.
        // But the point of the example is using malloc/free, so pretend it makes sense to use one here.
        printf("%d ", numbers[i]);
    }
    putchar('\n');
    free(numbers); // Freeing is absolutely necessary here; this function could be called any number of times.
}

int main(int argc, char *argv[])
{
    puts("Enter a number to count to that number.");
    puts("Entering zero or a negative number will quit the program.");
    int n;
    while (scanf("%d", &n), n > 0) {
        countToNumber(n);
    }
    return 0;
}
Sometimes, however, I'll need that memory for the whole time the program is running, and even if I end up allocating more for the same purpose, the data stored in the previously-allocated memory is still being used. So the only time I'd end up needing to free the memory is just before the program exits.
But if I don't end up freeing the memory, would that really cause a memory leak? I'd think the operating system would reclaim the memory as soon as the process exits. And even if it doesn't cause a memory leak, is there another reason it's important to free the memory, provided this isn't C++ and there isn't a destructor that needs to be called?
For example, is there any need for the free call in the below example?
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    void *ptr = malloc(1024);
    // do something with ptr
    free(ptr);
    return 0;
}
In that case the free isn't really inconvenient, but in cases where I'm dynamically allocating memory for structures that contain pointers to other dynamically-allocated data, it would be nice to know I don't need to set up a loop to do it. Especially if the pointer in the struct is to an object with the same struct, and I'd need to recursively delete them.
Generally, the OS will reclaim the memory, so no, you don't have to free() it. But it is really good practice to do so, and in some cases it may actually make a difference. A couple of examples:
You execute your program as a subprocess of another process. Depending on how that is done (see comments below), the memory won't be freed until the parent finishes. If the parent never finishes, that's a permanent leak.
You change your program to do something else. Now you need to hunt down every exit path and free everything, and you'll likely forget some.
Reclaiming the memory is done at the OS's discretion. All major operating systems do it, but if you port your program to another system, it may not.
Static-analysis and debugging tools work better with correct code.
If the memory is shared between processes, it may only be freed after all processes terminate, or possibly not even then.
By the way, this is just about memory. Freeing other resources, such as closing a file (fclose()), is much more important, as some OSes (Windows) may not properly flush the stream otherwise.
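On the "hunt down every exit path" point, one pattern that can help (my own sketch, not from the answer) is to register a cleanup handler with atexit(), so every normal exit funnels through one place; note it does not run on abort() or _Exit():

#include <stdio.h>
#include <stdlib.h>

static char *g_buffer = NULL;   /* hypothetical long-lived allocation */

static void cleanup(void)
{
    free(g_buffer);             /* free(NULL) is a safe no-op */
    g_buffer = NULL;
}

int main(void)
{
    g_buffer = malloc(1024);
    if (g_buffer == NULL)
        return EXIT_FAILURE;

    if (atexit(cleanup) != 0)   /* runs on return from main and on exit() */
        fprintf(stderr, "could not register cleanup handler\n");

    /* ... use g_buffer anywhere, exit from anywhere ... */
    return EXIT_SUCCESS;
}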

'calloc' does not automatically consume memory out of RAM

According to the answer to this question:
Difference between malloc and calloc?
Isak Savo explains that:
calloc does indeed touch the memory (it writes zeroes on it) and thus you'll be sure the OS is backing the allocation with actual RAM (or swap). This is also why it is slower than malloc (not only does it have to zero it, the OS must also find a suitable memory area by possibly swapping out other processes)
So, I decided to try it myself:
#include <stdlib.h>
#include <stdio.h>

#define ONE_MB 1048576

int main() {
    int *p = calloc(ONE_MB, sizeof(int));
    int n;
    for (n = 0; n != EOF; n = getchar()) ; /* Gives me time to inspect the process */
    free(p);
    return 0;
}
After executing this application, the Windows Task Manager told me that only 352 KB of RAM were in use.
It appears that the block I allocated is not being backed with RAM by the OS.
On the other hand, if I call malloc and initialize the array manually:
#include <stdlib.h>
#include <stdio.h>

#define ONE_MB 1048576

int main() {
    int *p = malloc(sizeof(int) * ONE_MB);
    int n;
    /* Manual Initialization */
    for (n = 0; n < ONE_MB; n++)
        p[n] = n;
    for (n = 0; n != EOF; n = getchar()) ; /* Gives me time to inspect the process */
    free(p);
    return 0;
}
Task Manager showed me that the application was actually using 4,452 KB of RAM.
Was Isak incorrect in his argument? If so, what does calloc do then? Doesn't it zero the whole memory block, and therefore "touch" it, just as I did?
If that's the case, why isn't RAM being used in the first sample?
He was wrong on the point that calloc is much slower because it has to write zeros into the block first.
Any sensibly designed OS prepares zeroed pages ahead of time for exactly such purposes (calloc() isn't the only consumer of them), and when you call calloc() the allocator can simply hand your process one of those already-zeroed blocks instead of the uninitialized one it might hand out for malloc().
So the OS handles such blocks of memory the same way, and if the OS or runtime decides you don't yet (or ever) need the full 1 MB, it doesn't have to back the whole block with zeroed physical memory either.
Where he was right:
If you call calloc() heavily and actually use the memory, the OS could run out of zeroed pages that were prepared during idle time.
That would indeed slow the system down a bit, because a call to calloc() would then force the OS to write the zeros on the spot.
But above all: there is no rule about whether malloc/calloc must back the memory at the time of the call or only as you use it. So your particular example depends on how the OS treats it.
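One way to observe this yourself (a Linux-specific sketch of mine; it assumes /proc/self/statm is readable and that the allocator satisfies a large calloc with fresh, demand-zero pages) is to compare the process's resident set size before and after actually touching the calloc'ed memory:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define ONE_MB 1048576

/* Resident set size in KB, read from /proc/self/statm (Linux-specific). */
static long rss_kb(void)
{
    long size = 0, resident = 0;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f == NULL || fscanf(f, "%ld %ld", &size, &resident) != 2)
        resident = -1;
    if (f != NULL)
        fclose(f);
    return resident < 0 ? -1 : resident * (sysconf(_SC_PAGESIZE) / 1024);
}

int main(void)
{
    printf("RSS before calloc: %ld KB\n", rss_kb());

    int *p = calloc(ONE_MB, sizeof(int));   /* 4 MB, nominally zeroed */
    if (p == NULL)
        return 1;
    printf("RSS after calloc:  %ld KB\n", rss_kb());

    for (size_t i = 0; i < ONE_MB; i++)     /* now actually touch every page */
        p[i] = (int)i;
    printf("RSS after writing: %ld KB\n", rss_kb());

    free(p);
    return 0;
}

On a typical glibc system the RSS barely moves after the calloc and jumps by roughly 4 MB after the writing loop; the exact numbers depend on the allocator and the kernel.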

Why is not freeing memory bad practice?

int a = 0;
int *b = malloc (sizeof(int));
b = malloc (sizeof(int));
The above code is bad because it allocates memory on the heap and then never frees it; the second malloc overwrites b, so you lose access to the first block. But you also created a and never used it, so you also allocated memory on the stack, which isn't freed until the scope ends.
So why is it bad practice to not free memory on the heap but okay for memory on the stack to not be freed (until the scope ends)?
Note: I know that memory on the stack can't be freed, I want to know why its not considered bad.
The stack memory will get released automatically when the scope ends. The memory allocated on the heap will remain occupied unless you release it explicitly. As an example:
#include <stdlib.h>

void foo(void) {
    int a = 0;               /* stack: released automatically when foo returns */
    void *b = malloc(1000);  /* heap: never freed, and the pointer is lost on return */
}

int main(void) {
    for (int i=0; i<1000; i++) {
        foo();
    }
    return 0;
}
Running this code will decrease the available memory by 1000*1000 bytes required by b, whereas the memory required by a will always get released automatically when you return from the foo call.
Simple: Because you'll leak memory. And memory leaks are bad. Leaks: bad, free: good.
When calling malloc or calloc, or indeed any *alloc function, you're claiming a chunk of memory (the size of which is defined by the arguments passed to the allocating function).
Unlike stack variables, which live in a region of memory the program has more or less free rein over, the same rules don't apply to heap memory. You may need to allocate heap memory for any number of reasons: the stack isn't big enough; you need an array of pointers but have no way of knowing at compile time how big that array will need to be; you need to share a chunk of memory between threads (threading nightmares); or you have a struct whose members are set in various places (functions) in your program...
Some of these reasons, by their very nature, imply that the memory can't be freed as soon as a pointer to it goes out of scope. Another pointer might still be around, in another scope, that points to the same block of memory.
There is, though, as mentioned in one of the comments, a slight drawback to all this: heap memory requires not just more awareness on the programmer's part, it's also more expensive and slower to work with than the stack.
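As an illustration of the "another pointer might still be around" point (my own sketch, not part of this answer), here is a function, hypothetically named make_counted, that hands a freshly allocated array back to its caller; the pointer inside the function dies when it returns, but the heap block lives on until the caller frees it:

#include <stdio.h>
#include <stdlib.h>

/* Returns a heap array holding 0..n-1, or NULL on failure.
 * The caller takes ownership and must free it. */
static int *make_counted(size_t n)
{
    int *a = malloc(n * sizeof *a);
    if (a != NULL)
        for (size_t i = 0; i < n; i++)
            a[i] = (int)i;
    return a;
}

int main(void)
{
    int *nums = make_counted(5);
    if (nums == NULL)
        return 1;
    for (size_t i = 0; i < 5; i++)
        printf("%d ", nums[i]);
    putchar('\n');
    free(nums);      /* the caller owns it, so the caller frees it */
    return 0;
}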
So some rules of thumb are:
You claimed the memory, so you take care of it... you make sure it's freed when you're done playing around with it.
Don't use heap memory without a valid reason. Avoiding stack overflow, for example, is a valid reason.
Anyway, some examples:
Stack overflow:
#include <stdio.h>

int main()
{
    int foo[2000000000]; // stack overflow, array is too large!
    return 0;
}
So, here we've depleted the stack, we need to allocate the memory on the heap:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int *foo = malloc(2000000000*sizeof(int)); // heap is bigger
    if (foo == NULL)
    {
        fprintf(stderr, "But not big enough\n");
    }
    free(foo); // free claimed memory
    return 0;
}
Or, an example of an array, whose length depends on user input:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int *arr = NULL; // null pointer
    int arrLen;
    scanf("%d", &arrLen);
    arr = malloc(arrLen * sizeof(int));
    if (arr == NULL)
    {
        fprintf(stderr, "Not enough heap-mem for %d ints\n", arrLen);
        exit(EXIT_FAILURE);
    }
    //do stuff
    free(arr);
    return 0;
}
And so the list goes on... Another case where malloc or calloc is useful: an array of strings that might each vary in size. Compare:
char str_array[20][100];
In this case str_array is an array of 20 char arrays (or strings), each 100 chars long. But what if 100 chars is the maximum you'll ever need, and on average, you'll only ever use 25 chars, or less?
You're writing in C, because it's fast and your program won't use any more resources than it actually needs? Then this isn't what you actually want to be doing. More likely, you want:
char *str_array[20];
for (int i=0;i<20;++i) str_array[i] = malloc((someInt+i)*sizeof(char));
Now each element in str_array has exactly the amount of memory I need allocated to it. That's much cleaner. However, in this case calling free(str_array) won't cut it. Another rule of thumb is: each alloc call has to have a matching free call, so deallocating this memory looks like this:
for (int i=0;i<20;++i) free(str_array[i]);
Note:
Dynamically allocated memory isn't the only cause of memory leaks, it has to be said. Opening a file with fopen but failing to close it again with fclose will cause a leak, too:
#include <stdio.h>
#include <stdlib.h>

int main()
{   // LEAK!!
    FILE *fp = fopen("some_file.txt", "w");
    if (fp == NULL) exit(EXIT_FAILURE);
    fprintf(fp, "%s\n", "I was written in a buggy program");
    return 0;
}
This will compile and run just fine, but it contains a leak that is easily plugged (and should be plugged) by adding just one line:
#include <stdio.h>
#include <stdlib.h>

int main()
{   // OK
    FILE *fp = fopen("some_file.txt", "w");
    if (fp == NULL) exit(EXIT_FAILURE);
    fprintf(fp, "%s\n", "I was written in a bug-free(?) program");
    fclose(fp);
    return 0;
}
As an aside: if the scope is really long, chances are you're trying to cram too much into a single function. Even so, if you're not: you can free claimed memory at any point, it needn't be at the end of the current scope:
#include <stdbool.h>
#include <stdlib.h>

bool some_long_f()
{
    int *foo = malloc(2000000000*sizeof(int));
    if (foo == NULL) exit(EXIT_FAILURE);
    //do stuff with foo
    free(foo);
    //do more stuff
    //and some more
    //...
    //and more
    return true;
}
Because "stack" and "heap", mentioned many times in the other answers, are sometimes misunderstood terms, even amongst C programmers, here is a great conversation discussing that topic.
So why is it bad practice to not free memory on the heap but okay for memory on the stack to not be freed (until the scope ends)?
Memory on the stack, such as memory allocated to automatic variables, is freed automatically upon exiting the scope in which it was created, whether that scope is the file, a function, or a block ( {...} ) within a function.
But memory on the heap, such as that obtained with malloc(), calloc(), or even fopen(), ties up resources that will not be made available for any other purpose until you explicitly release them with free() or fclose().
To illustrate why it is bad practice to allocate memory without freeing it, consider what would happen if an application were designed to run autonomously for a very long time, say an application used in the PID loop controlling the cruise control in your car. Suppose that application had un-freed memory, and that after 3 hours of running the microprocessor's available memory was exhausted, causing the PID to suddenly rail. "Ah!", you say, "This will never happen!" Yes, it does (look here; not exactly the same problem, but you get the idea).
If that word picture doesn't do the trick, then observe what happens when you run this application (with memory leaks) on your own PC. (at least view the graphic below to see what it did on mine)
Your computer will exhibit increasingly sluggish behavior until it eventually stops working. Likely, you will be required to re-boot to restore normal behavior.
(I would not recommend running it)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *buf = 0;

int main(void)
{
    long long i;
    char text[] = "a;lskdddddddd;js;'";
    buf = malloc(1000000);
    strcpy(buf, "a;lskdddddddd;js;dlkag;lkjsda;gkl;sdfja;klagj;aglkjaf;d"); /* first copy, so buf holds a valid string */
    i = 1;
    while (strlen(buf) < i*1000000)
    {
        strcat(buf, text);
        if (strlen(buf) > (i*10000) - 10)
        {
            i++;
            buf = realloc(buf, 10000000*i);
        }
    }
    return 0;
}
Memory usage after just 30 seconds of running this memory pig: (screenshot omitted)
I guess that has to do with scope "ending" really often (at the end of a function): when you return from a function that creates a and allocates b, you have, in a sense, freed the memory taken by a, but you have lost the memory used by b for the remainder of the execution.
Try calling that function a handful of times and you'll soon exhaust all of your memory. This never happens with stack variables (except in the case of runaway recursion).
Memory for local variables automatically is reclaimed when the function is left (by resetting the frame pointer).
The problem is that memory you allocate on the heap never gets freed until your program ends, unless you explicitly free it. That means every time you allocate more heap memory, you reduce available memory more and more, until eventually your program runs out (in theory).
Stack memory is different because it's laid out and used in a predictable pattern determined by the compiler. It expands as needed for a given block, then contracts when the block ends.
So why is it bad practice to not free memory on the heap but okay for memory on the stack to not be freed (until the scope ends)?
Imagine the following:
while ( some_condition() )
{
    int x;
    char *foo = malloc( sizeof *foo * N );
    // do something interesting with x and foo
}
Both x and foo are auto ("stack") variables. Logically speaking, a new instance of each is created and destroyed in each loop iteration [1]; no matter how many times this loop runs, the program will only allocate enough memory for a single instance of each.
However, each time through the loop, N bytes are allocated from the heap, and the address of those bytes is written to foo. Even though the variable foo ceases to exist at the end of the loop, that heap memory remains allocated, and now you can't free it because you've lost the reference to it. So each time the loop runs, another N bytes of heap memory is allocated. Over time, you run out of heap memory, which may cause your code to crash, or even cause a kernel panic depending on the platform. Even before then, you may see degraded performance in your code or other processes running on the same machine.
For long-running processes like Web servers, this is deadly. You always want to make sure you clean up after yourself. Stack-based variables are cleaned up for you, but you're responsible for cleaning up the heap after you're done.
[1] In practice, this (usually) isn't the case; if you look at the generated machine code, you'll (usually) see the stack space allocated for x and foo at function entry. Usually, space for all local variables (regardless of their scope within the function) is allocated at once.
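For contrast, here is a sketch of the same loop with the leak plugged (some_condition() and N are stand-ins I've stubbed out so the fragment compiles): freeing foo before the iteration ends keeps heap usage bounded no matter how long the loop runs.

#include <stdlib.h>

#define N 1024                      /* stand-in for whatever N really is */

static int some_condition(void)     /* stub so the sketch compiles */
{
    static int runs = 1000;
    return runs-- > 0;
}

int main(void)
{
    while (some_condition())
    {
        int x = 0;
        char *foo = malloc(sizeof *foo * N);
        if (foo == NULL)
            break;
        /* do something interesting with x and foo */
        foo[0] = (char)x;
        free(foo);                  /* plug the leak before foo goes away */
    }
    return 0;
}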

Understanding memory allocation, test program crashing

I am just about finished reading K&R, and that is all the C that I know. All my compilation is done from Windows command line using MinGW, and I have no knowledge of advanced debugging methods (hence the "ghetto debug" comment in my 2nd program below).
I am trying to make a few small test programs to help me better understand how memory allocation works. These first couple programs do not use malloc or free, I just wanted to see how memory is allocated and de-allocated for standard arrays local to a function. The idea is that I watch my running processes RAM usage to see if it corresponds with what I understand. For this first program below, it does work as I expected. The alloc_one_meg() function allocates and initializes 250,000 4-byte integers, but that MB is de-allocated as soon as the function returns. So if I call that function 1000000 times in a row, I should never see my RAM usage go much above 1MB. And, it works.
#include <stdio.h>
#include <stdlib.h>

void alloc_one_meg() {
    int megabyte[250000];
    int i;
    for (i=0; i<250000; i++) {
        megabyte[i] = rand();
    }
}

main()
{
    int i;
    for (i=0; i<1000000; i++) {
        alloc_one_meg();
    }
}
For this second program below, the idea was to not allow the function to exit, to have 1000 copies of the same function running at once, which I accomplished with recursion. My theory was that the program would consume 1GB of RAM before it de-allocated it all after the recursion finished. However, it doesn't get past the 2nd loop through the recursion (see my ghetto debug comment). The program crashes with a pretty non-informative (to me) message (a Windows pop-up saying ____.exe has encountered a problem). Usually I can always get to the bottom of things with my ghetto debug method... but it's not working here. I'm stumped. What is the problem with this code? Thanks!
#include <stdio.h>
#include <stdlib.h>

int j=0;

void alloc_one_meg() {
    int megabyte[250000];
    int i;
    for (i=0; i<250000; i++) {
        megabyte[i] = rand();
    }
    j++;
    printf("Loop %d\n", j); // ghetto debug
    if (j<1000) {
        alloc_one_meg();
    }
}

main()
{
    alloc_one_meg();
}
Followup question posted here.
You're running into a stack overflow.
Local automatic storage variables (such as megabyte) are allocated on the stack, which has a limited amount of space. malloc allocates on the heap, which allows much larger allocations.
You can read more here:
http://en.wikipedia.org/wiki/Stack_overflow
(I should note that the C language does not specify where memory is allocated - stack and heap are implementation details)
The size of the stack in a Windows program is usually around 1 MB, so on the second recursion you're overflowing the stack. You shouldn't be allocating such large arrays on the stack; use malloc and free to allocate and deallocate the memory on the heap (there's no way to get around malloc for arrays of that size):
void alloc_one_meg() {
    int *megabyte = malloc(sizeof(int) * 250000); // allocate space for 250000
                                                  // ints on the heap
    int i;
    for (i=0; i<250000; i++) {
        megabyte[i] = rand();
    }
    j++;
    printf("Loop %d\n", j); // ghetto debug
    if (j<1000) {
        alloc_one_meg();
    }
    free(megabyte); // DO NOT FORGET THIS
}
That said, you can actually change the stack size of a program and make it bigger (though I'd only do so as an educational exercise, not in production code). For Visual Studio you can use the /F compiler option, and on Linux you can use setrlimit(3). I'm not sure what to use with MinGW, though.
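For what it's worth, with a MinGW toolchain the stack reserve is a link-time setting; if I remember GNU ld's PE options correctly, something like the following asks for a 16 MB stack (worth verifying against your binutils documentation):

gcc -Wl,--stack,16777216 program.c -o program.exe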
The memory you are allocating via the recursive functional calls is allocated from the stack. All of the stack memory must be contiguous. When your process starts a thread, Windows will reserve a range of virtual memory address space for that thread's stack. The amount of memory to be reserved is specified in your EXE file's "PE header." PE stands for "Portable Executable."
Using the dumpbin utility included with Visual Studio, with itself (dumpbin.exe) as the input file:
dumpbin /headers dumpbin.exe
... there is some output, and then:
100000 size of stack reserve
2000 size of stack commit
The "100000" is a hexadecimal number equal to 1,048,576, so this represents around 1MB.
In other words, the operating system will only reserve a 1MB address range for the stack. When that address range is used up, Windows may or may not be able to allocate further consecutive memory ranges to increase the stack. The result depends on whether further contiguous address range is available. It is very unlikely to be available, due to the other allocations Windows made when the thread began.
To allocate a maximum amount of virtual memory under Windows, use the VirtualAlloc family of functions.
StackOverflow. Is this a trick question?

Problem usage memory in C

Please help :)
OS: Linux
While the program is sitting in sleep(1000), top (display Linux tasks) reports 7.7 %MEM for it.
valgrind finds no memory leak.
As far as I can tell the code is written correctly, and every malloc result is checked against NULL.
But why, by the time the program is in sleep, has its memory usage NOT decreased? What am I missing?
Sorry for my bad English. Thanks.
~ # tmp_soft
For : Is it free?? no
Is it free?? yes
For 0
For : Is it free?? no
Is it free?? yes
For 1
END : Is it free?? yes
END
~ #top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
23060 root 20 0 155m 153m 448 S 0 7.7 0:01.07 tmp_soft
Full source : tmp_soft.c
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

struct cache_db_s
{
    int table_update;
    struct cache_db_s * p_next;
};

void free_cache_db (struct cache_db_s ** cache_db)
{
    struct cache_db_s * cache_db_t;
    while (*cache_db != NULL)
    {
        cache_db_t = *cache_db;
        *cache_db = (*cache_db)->p_next;
        free(cache_db_t);
        cache_db_t = NULL;
    }
    printf("Is it free?? %s\n", *cache_db==NULL ? "yes" : "no");
}

void make_cache_db (struct cache_db_s ** cache_db)
{
    struct cache_db_s * cache_db_t = NULL;
    int n = 10000000;
    for (int i=0; i = n; i++)
    {
        if ((cache_db_t=malloc(sizeof(struct cache_db_s)))==NULL) {
            printf("Error : malloc 1 -> cache_db_s (no free memory) \n");
            break;
        }
        memset(cache_db_t, 0, sizeof(struct cache_db_s));
        cache_db_t->table_update = 1; // tmp
        cache_db_t->p_next = *cache_db;
        *cache_db = cache_db_t;
        cache_db_t = NULL;
    }
}

int main(int argc, char **argv)
{
    struct cache_db_s * cache_db = NULL;
    for (int ii=0; ii < 2; ii++) {
        make_cache_db(&cache_db);
        printf("For : Is it free?? %s\n", cache_db==NULL ? "yes" : "no");
        free_cache_db(&cache_db);
        printf("For %d \n", ii);
    }
    printf("END : Is it free?? %s\n", cache_db==NULL ? "yes" : "no");
    printf("END \n");
    sleep(1000);
    return 0;
}
For good reasons, virtually no memory allocator returns blocks to the OS
Memory can only be removed from your program in units of pages, and even that is unlikely to be observed.
calloc(3) and malloc(3) do interact with the kernel to get memory, if necessary. But very, very few implementations of free(3) ever return memory to the kernel [1]; they just add it to a free list that calloc() and malloc() will consult later in order to reuse the released blocks. There are good reasons for this design approach.
Even if a free() wanted to return memory to the system, it would need at least one contiguous memory page in order to get the kernel to actually protect the region, so releasing a small block would only lead to a protection change if it was the last small block in a page.
Theory of Operation
So malloc(3) gets memory from the kernel when it needs it, ultimately in units of discrete page multiples. These pages are divided or consolidated as the program requires. Malloc and free cooperate to maintain a directory. They coalesce adjacent free blocks when possible in order to be able to provide large blocks. The directory may or may not involve using the memory in freed blocks to form a linked list. (The alternative is a bit more shared-memory and paging-friendly, and it involves allocating memory specifically for the directory.) Malloc and free have little if any ability to enforce access to individual blocks even when special and optional debugging code is compiled into the program.
[1] The fact that very few implementations of free() attempt to return memory to the system is not at all due to the implementors slacking off. Interacting with the kernel is much slower than simply executing library code, and the benefit would be small. Most programs have a steady-state or increasing memory footprint, so the time spent analyzing the heap looking for returnable memory would be completely wasted. Other reasons include the fact that internal fragmentation makes page-aligned blocks unlikely to exist, and it's likely that returning a block would fragment the blocks on either side of it. Finally, the few programs that do return large amounts of memory are likely to bypass malloc() and simply allocate and free pages anyway.
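To make the "directory" idea above concrete, here is a toy sketch of a free list with a first-fit search (purely illustrative; it is not how any particular libc lays out its heap, and a real allocator would also split blocks, coalesce neighbours, and store the headers inside the freed memory itself):

#include <stddef.h>
#include <stdio.h>

/* Hypothetical free-list node: each free block records its size and the
 * next free block, illustrating the "directory" described above. */
struct free_block {
    size_t size;
    struct free_block *next;
};

/* First-fit search over a free list: return the first block big enough. */
static struct free_block *first_fit(struct free_block *head, size_t want)
{
    for (struct free_block *b = head; b != NULL; b = b->next)
        if (b->size >= want)
            return b;
    return NULL;
}

int main(void)
{
    /* A toy free list with three blocks of different sizes. */
    struct free_block c = { 64,  NULL };
    struct free_block b = { 512, &c };
    struct free_block a = { 16,  &b };

    struct free_block *hit = first_fit(&a, 100);
    printf("first fit for 100 bytes: block of %zu bytes\n",
           hit ? hit->size : (size_t)0);
    return 0;
}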
If you're trying to establish whether your program has a memory leak, then top isn't the right tool for the job (valgrind is).
top shows memory usage as seen by the OS. Even if you call free, there is no guarantee that the freed memory gets returned to the OS; typically, it doesn't. Nonetheless, the memory does become "free" in the sense that your process can use it for subsequent allocations.
edit: If your libc supports it, you could try experimenting with M_TRIM_THRESHOLD. Even if you do follow this path, it's going to be tricky (a single used block sitting close to the top of the heap would prevent all the free memory below it from being released to the OS).
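If you are on glibc, the knob mentioned in the edit looks roughly like this (a glibc-specific sketch; other C libraries won't have these <malloc.h> tunables, and the actual trimming behaviour still depends on heap layout):

#include <malloc.h>    /* glibc-specific: mallopt(), malloc_trim() */
#include <stdlib.h>

int main(void)
{
    /* Ask glibc to give heap memory back to the OS more eagerly: trim the
     * top of the heap whenever more than 128 KB of it is free. */
    mallopt(M_TRIM_THRESHOLD, 128 * 1024);

    /* Lots of small blocks, so they come from the heap rather than mmap. */
    enum { COUNT = 1000 };
    void *blocks[COUNT];
    for (int i = 0; i < COUNT; i++)
        blocks[i] = malloc(64 * 1024);
    for (int i = 0; i < COUNT; i++)
        free(blocks[i]);

    /* Or trim explicitly: release whatever free memory sits at the heap top. */
    malloc_trim(0);
    return 0;
}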
Generally free() doesn't give physical memory back to the OS; the pages stay mapped in your process's virtual memory. If you allocate a big chunk of memory, libc may allocate it with mmap(); if you then free it, libc may release it back to the OS with munmap(), in which case top will show your memory usage come down.
So, if you want to release memory to the OS explicitly, you can use mmap()/munmap() yourself.
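A minimal sketch of doing that by hand on Linux (my illustration, assuming glibc and an anonymous private mapping; because this bypasses malloc entirely, top really should show the usage drop after munmap):

#define _DEFAULT_SOURCE          /* for MAP_ANONYMOUS on glibc */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 100 * 1024 * 1024;   /* 100 MB */

    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memset(p, 1, len);    /* touch the pages so they show up as resident */
    puts("mapped and touched; check top, then press Enter");
    getchar();

    munmap(p, len);       /* this really does give the pages back to the OS */
    puts("unmapped; check top again, then press Enter");
    getchar();
    return 0;
}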
When you free() memory, it is returned to the standard C library's pool of memory, not to the operating system. From the operating system's point of view, as you see it through top, the process is still "using" this memory. Within the process, the C library has accounted for the memory and could return the same pointer from malloc() in the future.
I will explain it some more with a different beginning:
During your calls to malloc, the standard library implementation may determine that the process does not have enough allocated memory from the operating system. At that time, the library will make a system call to receive more memory from the operating system to the process (for example, sbrk() or VirtualAlloc() system calls on Unix or Windows, respectively).
After the library requests additional memory from the operating system, it adds this memory to its structure of memory available to return from malloc. Later calls to malloc will use this memory until it runs out. Then, the library asks the operating system for even more memory.
When you free memory, the library usually does not return it to the operating system. There are many reasons for this. One is that the library author assumed you will call malloc again; and if you will not call malloc again, your program will probably end soon. In either case, there is not much advantage in returning the memory to the operating system.
Another reason that the library may not return the memory to the operating system is that the memory from operating system is allocated in large, contiguous ranges. It could only be returned when an entire contiguous range is no longer in use. The pattern of calling malloc and free may not clear the entire range of use.
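You can watch that handshake on Linux by printing the program break before and after a batch of small allocations; sbrk() is obsolete and glibc-specific, so treat this purely as a demonstration sketch (large allocations would go through mmap instead and wouldn't move the break):

#define _DEFAULT_SOURCE          /* for sbrk() on glibc */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *before = sbrk(0);      /* current end of the heap segment */

    for (int i = 0; i < 1000; i++)
        (void)malloc(128);       /* deliberately leaked; this is only a demo */

    void *after = sbrk(0);
    printf("program break moved by %ld bytes\n",
           (long)((char *)after - (char *)before));
    return 0;
}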
Two problems:
In make_cache_db(), the line
for (int i=0; i = n; i++)
should probably read
for (int i=0; i<n; i++)
Otherwise, you'll only allocate a single cache_db_s node.
The way you're assigning cache_db in make_cache_db() seems to be buggy. It seems that your intention is to return a pointer to the first element of the linked list; but because you're reassigning cache_db in every iteration of the loop, you'll end up returning a pointer to the last element of the list.
If you later free the list using free_cache_db(), this will cause you to leak memory. At the moment, though, this problem is masked by the bug described in the previous bullet point, which causes you to allocate lists of only length 1.
Independent of these bugs, the point raised by aix is very valid: The runtime library need not return all free()d memory to the operating system.
