#include <stdio.h>
#include <string.h>
#include <stdlib.h>
struct Person
{
    unsigned long age;
    char name[20];
};

struct Array
{
    struct Person someone;
    unsigned long used;
    unsigned long size;
};

int main()
{
    //pointer to array of structs
    struct Array** city;
    //creating heap space for one struct Array
    struct Array* people = malloc(sizeof(struct Array));
    city = &people;
    //initializing a person
    struct Person Rob;
    Rob.age = 5;
    strcpy(Rob.name, "Robert");
    //putting Rob into the array
    people[0].someone = Rob;
    //prints Robert
    printf("%s\n", people[0].someone.name);
    //another struct
    struct Person Dave;
    Dave.age = 19;
    strcpy(Dave.name, "Dave");
    //creating more space on the heap for people
    people = realloc(people, sizeof(struct Array) * 2);
    //How do I know that this data is safe in memory from being overwritten?
    people[1].someone = Dave;
    //prints Dave
    printf("%s\n", people[1].someone.name);
    //accessing memory on the heap I do not own?
    people[5].someone = Rob;
    //prints "Robert" - why is this okay? Am I potentially overwriting memory?
    printf("%s\n", people[5].someone.name);
    return 0;
}
In the above code I attempt to make a pointer to a dynamic array of structs; I'm not sure whether I succeeded in that part, but my main concern is this: I use malloc to create space on the heap for the array 'people'. Later in the code I create another struct Person and use realloc to create more space on the heap for 'people'. I then write to memory outside of what I thought I allocated by doing 'people[5].someone=Rob;'. This still works, and I can read the value back from that memory location. My question is: why does this work? Am I potentially overwriting memory by writing to memory I did not specifically allocate for people? Am I actually using malloc and realloc correctly? I've heard in another post that there are ways of testing whether they succeeded. I am new to C, so if my assumptions or terminology are off, please correct me.
I'm not an expert in C, not even an intermediate; most of the time I program in C#, so there may be some mistakes here.
Modern operating systems have a component called the memory manager, which we can ask to give our process some amount of memory. On Windows there's a function for that - VirtualAlloc. It's a really powerful function; you can read more about it on MSDN.
It works great and gives us all the memory we ask for, but there's a catch - it hands out whole pages (typically 4 KB). By itself that is not a big problem; you can use this memory the same way as if it had been allocated with malloc, and there will be no error.
But it becomes a problem when, for example, we allocate a 10-byte chunk with VirtualAlloc: it actually gives us a 4096-byte chunk, because the size is rounded up to a page boundary. So VirtualAlloc allocates a 4 KB chunk, but we only use 10 bytes of it; the remaining 4086 bytes are "gone". If we create a second 10-byte array, VirtualAlloc will give us another 4096-byte chunk, so two 10-byte arrays actually take 8 KB of RAM.
To solve this problem, C programs use the malloc function, which is part of the C runtime library. It allocates space with VirtualAlloc and returns pointers to parts of it. Back to our previous arrays: if we allocate a 10-byte array with malloc, the runtime library calls VirtualAlloc to get some space and malloc returns a pointer to the beginning of it. But if we allocate a second 10-byte array, malloc won't call VirtualAlloc again. Instead, it will use the free space in the already allocated page: after the first array there are 4086 unused bytes left in our chunk, so malloc uses that space and, for the second array, simply returns "address of chunk" + 10 (a memory address inside the same page).
Now we can allocate roughly 400 ten-byte arrays and they will occupy just one 4096-byte page if we use malloc. The naive approach of calling VirtualAlloc for each one would take 400 * 4096 bytes = 1600 KB, a rather big figure compared to 4096 bytes with malloc.
There's another reason - performance, as VirtualAlloc is a really expensive operation. malloc only does some pointer math when there is free space left in the chunks it already holds; when there isn't, it calls VirtualAlloc. Real implementations are much more complicated than this, but I think that is enough to explain the cause.
Okay, let's return to the question. You allocate memory for the Array array. Let's calculate its size (assuming a 4-byte unsigned long, as on Windows): sizeof(Person) = sizeof(unsigned long) + sizeof(char[20]) = 4 + 20 = 24 bytes; sizeof(Array) = sizeof(Person) + 2 * sizeof(unsigned long) = 24 + 8 = 32 bytes. An array of 2 elements will take 32 * 2 = 64 bytes. As I said before, malloc will call VirtualAlloc to allocate some memory, and it gets a 4096-byte page. For example, let's assume that the address of the chunk's beginning is 0. The application can modify any byte from 0 to 4095, because we own that whole page and won't get a page fault. What is array indexing, array[n]? It's just the array's base address plus an offset of n * sizeof(*array) bytes. In the case of people[5] that is 0 + 5 * sizeof(Array) = 5 * 32 = 160 bytes. Gotcha! We're still inside the chunk's boundary, that is, we access an existing physical page. A page fault would happen if we tried to access a nonexistent virtual page, but in our case the byte at offset 160 exists (we assumed addresses 0 to 4095). It's dangerous to access unallocated space, as it can lead to all sorts of unknown consequences, but the hardware lets us do it!
That's why you don't get any Access Violation at all. But it's actually MUCH WORSE. If you try to access a null pointer, you get a page fault and your app just crashes, so you WILL know the cause of the problem with the help of a debugger or something else. But if you overrun a buffer and don't get any error, you will go crazy looking for the cause, because this kind of error is REALLY HARD to find, and you may not even be AWARE of it. So NEVER OVERRUN A BUFFER ALLOCATED ON THE HEAP. Microsoft's C runtime has a special "debug" version of malloc that can catch these errors at runtime, but you need to compile the application in the "Debug" configuration. There are also tools like Valgrind, but I have little experience with them.
Well, I have written a lot; sorry for my English, I'm still learning it. Hope it helps you.
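If you want to check that arithmetic on your own machine, a small sketch like the following (not from the original post) prints the struct sizes and the byte offset of people[5]. The exact numbers depend on your platform: on most 64-bit Unix systems unsigned long is 8 bytes and padding makes the structs larger than the Windows-style figures used above.

#include <stdio.h>

//same structs as in the question
struct Person { unsigned long age; char name[20]; };
struct Array  { struct Person someone; unsigned long used; unsigned long size; };

int main(void)
{
    printf("sizeof(struct Person) = %zu bytes\n", sizeof(struct Person));
    printf("sizeof(struct Array)  = %zu bytes\n", sizeof(struct Array));
    //people[5] lives 5 * sizeof(struct Array) bytes past the start of the block
    printf("byte offset of people[5] = %zu bytes\n", 5 * sizeof(struct Array));
    return 0;
}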
First, never forget to free your memory.
// NEVER FORGET TO FREE YOUR MEMORY
free(people);
As for this part
//accessing memory on the heap I do not own?
people[5].someone=Rob;
//prints "Robert" why is this okay? Am I potentially overwriting memory?
printf("%s\n",people[5].someone.name);
you are just getting lucky (or unlucky, in my opinion, since you don't see the logical mistake you are making).
This is undefined behaviour: you have two elements, but you access a sixth one, so you go out of bounds.
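If you want to do this without relying on luck, a minimal sketch like the one below (my own restructuring, not the original poster's code) grows a plain array of struct Person, checks every malloc/realloc result, and only writes inside the space it has actually been given:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct Person { unsigned long age; char name[20]; };

int main(void)
{
    size_t size = 1, used = 0;
    struct Person *people = malloc(size * sizeof *people);
    if (people == NULL) { perror("malloc"); return 1; }

    struct Person rob = { 5, "Robert" };
    people[used++] = rob;

    if (used == size) {                       //grow before writing past the end
        struct Person *tmp = realloc(people, 2 * size * sizeof *people);
        if (tmp == NULL) { perror("realloc"); free(people); return 1; }
        people = tmp;                         //only replace people once realloc succeeds
        size *= 2;
    }

    struct Person dave = { 19, "Dave" };
    people[used++] = dave;

    for (size_t i = 0; i < used; i++)
        printf("%s\n", people[i].name);

    free(people);
    return 0;
}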
Related
I have a pointer to a buffer that was initialised with calloc:
notEncryptBuf = (unsigned char*) calloc(1024, notEncryptBuf_len);
Later I moved the pointer to another position:
notEncryptBuf += 20;
And finally I free the buffer:
free(notEncryptBuf);
Will it free the whole allocated size? How does C know the size of the memory it needs to free?
The behavior of free is specified only if it is passed an address that was previously returned by malloc or a related routine or is passed a null pointer (in which case it does nothing). If you pass an address modified from an original allocation, as by notEncryptBuf += 20;, the behavior of free is not specified.
C implementations commonly know how much space is in an allocation because they store it in some bytes immediately preceding the allocation. For example, if you ask for 1,024 bytes, it may allocate 1,040, record information about the allocation in the first 16 bytes, and return to you the address 16 bytes after the start of the whole allocation. Then, when you pass that address to free, it looks in the 16 bytes before that address to see the amount of space.
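As an illustration of that first scheme, here is a minimal sketch (my own toy code, not glibc's actual implementation) that stores the requested size in a header just in front of the pointer handed to the caller, so the matching free can recover it later:

#include <stdio.h>
#include <stdlib.h>

//the union forces worst-case alignment for the header
typedef union {
    size_t size;
    long double pad;
} header_t;

static void *my_malloc(size_t size)
{
    header_t *h = malloc(sizeof *h + size);   //room for the header plus the payload
    if (h == NULL)
        return NULL;
    h->size = size;                           //record the requested size in the header
    return h + 1;                             //hand back the bytes just after it
}

static void my_free(void *p)
{
    if (p == NULL)
        return;
    header_t *h = (header_t *)p - 1;          //step back to the header
    printf("freeing %zu bytes\n", h->size);
    free(h);                                  //release the whole block
}

int main(void)
{
    char *s = my_malloc(1024);
    my_free(s);                               //prints "freeing 1024 bytes"
    return 0;
}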
Other implementations are theoretically possible. For example, a memory manager could designate one zone of memory for common fixed-size allocations, such as 32 bytes, and then use a bitmap to indicate whether each 32-byte block in that zone is free or allocated. Or it could keep a database of allocations, using a hash table or trees or other data structures. When free is called, it would look up the address in the database.
How does C know the size of the memory it needs to free?
"C" does not know about memory allocation or freeing. It relies on the underlying memory manager to keep track of the allocated blocks and free them up.
That said, if you pass a pointer to free() which was not returned by an allocator function, it invokes undefined behaviour. So you cannot pass the shifted pointer to free(); you need to pass the pointer that was returned by calloc().
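In practice that just means keeping the original pointer around. A minimal sketch, reusing the question's variable names (notEncryptBuf_len is assumed here just for the example):

#include <stdlib.h>

int main(void)
{
    size_t notEncryptBuf_len = 16;                         //hypothetical length
    unsigned char *notEncryptBuf = calloc(1024, notEncryptBuf_len);
    if (notEncryptBuf == NULL)
        return 1;

    unsigned char *cursor = notEncryptBuf + 20;            //advance a copy, not the original
    (void)cursor;                                          //... work through cursor ...

    free(notEncryptBuf);                                   //pass free() the pointer calloc returned
    return 0;
}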
A good way to answer these kinds of questions is to ask yourself, “how would I myself write malloc() and free()?”
Suppose you have a “memory pool” — fancy words for just an array of bytes:
unsigned char memory[10000];
Now the user wants eight bytes of that. The user calls ptr = my_malloc(8). You know full well that you can’t just give the user any random spot in your memory array — you can only give away stuff that hasn’t already been given away.
In other words, you somehow need to keep track of what pieces of memory have been given away.
Linked-lists → Variable-sized elements in an array
One way we know of to manage dynamic memory is through linked lists. A linked list is a chain of nodes, each of which is a block of memory that you organize with a struct:
struct node
{
    SOME_TYPE data;        // the data to store
    struct node * next;    // a pointer to the next node
};
However, since we are the memory manager, we don’t have some magic pool to allocate our node. We have to use our own memory[] to create space for the node.
Let’s make a simple modification. Instead of a pointer, we will keep track of how big a piece is. We can do this with a structure:
struct piece_of_memory
{
    int size;                  // how many bytes of 'memory' follow
    unsigned char memory[];    // flexible array member: the bytes themselves
};
Memory, then, is just an array of those things, where all the sizes add up to our available memory pool size:
struct piece_of_memory memory[...];   // conceptually: one piece after another
So now our initial pool of memory looks like this:
int size = 9992; // 10000 minus an 8-byte header (the size field, padded to 8 bytes on typical 64-bit systems)
unsigned char memory[9992];
Graphically, that’s something like
[----,------------------------------------------------------------]
  ↑                                ↑
 9992                            memory
If I give away eight bytes, that piece gets split in two:
[----,---][----,--------------------------------------------------]
  ↑    ↑    ↑                            ↑
  ↑    ↑    ↑                            memory
  ↑    ↑    new size = 9992 - 8 - 8 (header) = 9976
  ↑    ↑
  8    returned from malloc
That is two of those structs in a row
int size = 8
unsigned char memory[8]
int size = 9976
unsigned char memory[9976]
We can verify that the pieces all use exactly 10000 bytes:
{8 (size field) + 8 (payload)} + {8 (size field) + 9976 (payload)}
= 16 + 9984
= 10000
So when the user asks us to ptr = my_malloc(8), we find a piece of memory with at least eight available bytes, rearrange things, then return a pointer to the ‘memory’ part (not the ‘size’!).
Freeing allocated memory
Suppose our user is now finished with the eight bytes and calls my_free(ptr).
[----,---][----,--------------------------------------------------]
   8   ↑      9976
       ↑
    free me!
We can find our struct piece_of_memory (its header sits just before the address we returned), and we can recombine the free pieces of memory into a whole free block:
[----,------------------------------------------------------------]
9992
Notice how this only works if the user gives us back an address we handed out earlier, right? What would happen if the user passed in a wrong ptr value?
More to think about
Naturally we must also be able to keep track of which blocks are available to return and which ones are in use. This makes our struct piece_of_memory a bit more complicated. We could do something like:
struct piece_of_memory
{
    int size;                  // how many bytes of 'memory' follow
    bool is_used;              // needs <stdbool.h> (or use an int flag)
    unsigned char memory[];    // flexible array member
};
We also need a way for the memory manager to search through the memory blocks for a piece that is big enough for the requested size. If we want to be smart about it, we might take some time to find the smallest available block that is big enough for the requested size.
We don’t actually have to keep the (‘size’ and ‘is_used’) with the ‘memory’ pieces, either. We could split up our struct to simply have an array of (‘size’ + ‘is_used’) structures at one end of our memory[] array and all the pieces of returned memory at the other end.
Finally, we must waste a little memory when we divide it up, in order to make sure that we always return a pointer aligned for the strictest alignment our user might need. For example, if the user wants dynamic memory for an array of double, we don't want to return something that is merely byte-aligned.
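Putting the last few paragraphs together, a minimal sketch of this scheme might look like the following. It is only an illustration of the idea above (fixed 8-byte headers, first-fit search, sizes rounded up to 8); real allocators also coalesce neighbouring free pieces and handle alignment far more carefully:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define POOL_SIZE 10000

struct piece_header {
    uint32_t size;      //bytes of payload that follow this header
    uint32_t is_used;   //0 = free, 1 = handed out
};

static unsigned char pool[POOL_SIZE];

static void pool_init(void)
{
    struct piece_header h = { POOL_SIZE - sizeof h, 0 };
    memcpy(pool, &h, sizeof h);               //one big free piece to start with
}

static void *my_malloc(size_t want)
{
    want = (want + 7) & ~(size_t)7;           //round up for alignment
    size_t off = 0;
    while (off + sizeof(struct piece_header) <= POOL_SIZE) {
        struct piece_header h;
        memcpy(&h, pool + off, sizeof h);
        if (!h.is_used && h.size >= want + sizeof h) {
            //split: the tail of this piece becomes a new free piece
            struct piece_header rest = { (uint32_t)(h.size - want - sizeof h), 0 };
            memcpy(pool + off + sizeof h + want, &rest, sizeof rest);
            h.size = (uint32_t)want;
            h.is_used = 1;
            memcpy(pool + off, &h, sizeof h);
            return pool + off + sizeof h;     //pointer just past the header
        }
        off += sizeof h + h.size;             //walk to the next piece
    }
    return NULL;                              //nothing big enough
}

static void my_free(void *p)
{
    if (p == NULL)
        return;
    struct piece_header h;
    unsigned char *hp = (unsigned char *)p - sizeof h;  //the header sits just before p
    memcpy(&h, hp, sizeof h);
    h.is_used = 0;                            //mark the piece free again
    memcpy(hp, &h, sizeof h);
}

int main(void)
{
    pool_init();
    void *a = my_malloc(8);
    void *b = my_malloc(100);
    printf("a = %p, b = %p\n", a, b);
    my_free(a);                               //only valid for pointers we handed out
    my_free(b);
    return 0;
}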
This isn’t the only way to do it!
This is just one simple way. More advanced structures could certainly be used as well.
Conclusions
Hopefully you can answer your own questions now:
How does the memory manager know how much memory to free?
(Because it keeps track of it.)
Can I pass free() a pointer that was not given to me by the memory manager?
(No, because it would break things.)
Obviously the memory manager can be written to prevent things from breaking if you try to free a pointer it did not give you, but the C specification does not require it to. It requires (expects) the user to not give it bad input.
I am going through this link and learning C. Interesting part on the page:
The real purpose of unions is to prevent memory fragmentation by arranging for a standard size for data in the memory. By having a standard data size we can guarantee that any hole left when dynamically allocated memory is freed will always be reusable by another instance of the same type of union.
I understand this part by the following code:
typedef struct {
    char name[100];
    int age;
    int rollno;
} student;

typedef union {
    student *studentPtr;
    char *text;
} typeUnion;

int main(int argc, char **argv)
{
    typeUnion union1;
    //use union1.studentPtr
    union1.text = "Welcome to StackOverflow";
    //use union1.text
    return 0;
}
Well, in the above code union1.text is reusing the space previously set aside for union1.studentPtr - maybe not all of it, but it is still reusing it.
Now, the part I don't understand is: when can the space freed by malloc not be reused, leading to memory fragmentation?
Edit: going through the comments and answers, it seems imperative to read the classic text; I'm adding this edit to the post presuming it will help beginners like me.
The comments have more expertise regarding unions in general.
Regarding your question specifically, this is my understanding:
A union sets aside memory for the largest datatype among its members. So, for example, a union containing a short int and a long int will set aside enough memory for a long int.
Imagine that instead of a union you allocate a short int on the heap. But then you need a long int instead, so you free the short int and use malloc to allocate memory for a long int, which has to be contiguous. You are now left with a small free hole in the middle of an otherwise used block of memory, sitting there waiting for you to request a piece of memory small enough to fit into it.
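To see the first point concretely, a tiny stand-alone example (mine, not from the linked text) shows that the union reserves space for its largest member, so either member reuses the same bytes:

#include <stdio.h>

union number {
    short s;     //typically 2 bytes
    long  l;     //typically 4 or 8 bytes
};

int main(void)
{
    printf("sizeof(short) = %zu\n", sizeof(short));
    printf("sizeof(long)  = %zu\n", sizeof(long));
    printf("sizeof(union number) = %zu\n", sizeof(union number));  //size of the largest member
    return 0;
}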
Aside: if you're learning C I recommend the classic text. It's dated, but I love its simplicity, clarity and textbook-style approach.
That page is wrong; most programs simply assume memory usage will be fine and don't pay any attention to it.
In the following diagrams, assume each character represents, say, 8 bytes. Letters represent different allocations, and underscores represent free memory. The scales involved are ludicrously small, and I'm skipping over details (like the allocation metadata used by most malloc implementations), but the principles should be there.
Start with empty memory
_____________________________________________________
Then a bunch of 32-byte allocations occur as a program runs.
AAAABBBBCCCCDDDDEEEEFFFFGGGGHHHHIIIIJJJJKKKKLLLL________
The memory allocated to the program extends all the way to the last byte of the 'L' allocation; it's using 12*4*8 = 384 bytes.
Now the program frees every other allocation.
AAAA____CCCC____EEEE____GGGG____IIII____KKKK__________
Now the program is really only using 6*4*8 = 192 bytes, but the operating system has to keep all 352 bytes from the first 'A' to the last 'K' allocated for the program. Those freed gaps in between all the allocations are an example of memory fragmentation.
Now the program wants to allocate another 32-byte block. It could happen like this:
AAAAMMMMCCCC____EEEE____GGGG____IIII____KKKK_________
The new allocation fits in one of the gaps created by the frees, and this is fine since it recycles one of the gaps so we're wasting less space.
Now say the program needs to allocate a 40 byte block. None of the gaps is big enough, so the allocation has to go at the end and the operating system has to allocate more memory for the program, 352+40=392 bytes. All of the memory in those gaps is being wasted. This is the kind of waste due to memory fragmentation the webpage is talking about.
If all of the allocations had been 40 bytes to start with, then gap recycling could be maximized.
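You can try to observe this on a real allocator with a sketch like the one below, though what you actually see is implementation-dependent (glibc, for instance, rounds sizes and may reuse gaps differently), so treat it as an experiment rather than a guarantee:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    enum { N = 12 };
    void *blocks[N];

    for (int i = 0; i < N; i++)               //a row of 32-byte allocations
        blocks[i] = malloc(32);

    for (int i = 1; i < N; i += 2) {          //free every other one, leaving gaps
        free(blocks[i]);
        blocks[i] = NULL;
    }

    void *small = malloc(32);                 //may land in one of the freed gaps
    void *large = malloc(40);                 //too big for any 32-byte gap

    printf("first block:   %p\n", blocks[0]);
    printf("32-byte alloc: %p (often inside the old row)\n", small);
    printf("40-byte alloc: %p (often past the end of the row)\n", large);

    free(small);
    free(large);
    for (int i = 0; i < N; i += 2)
        free(blocks[i]);
    return 0;
}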
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *arr = (int*)malloc(10);
    int i;
    for (i = 0; i < 100; i++)
    {
        arr[i] = i;
        printf("%d", arr[i]);
    }
    return 0;
}
I am running the above program. The call to malloc allocates 10 bytes of memory, and since (I assumed) each int variable takes up 2 bytes, I should be able to store 5 int variables of 2 bytes each, using up exactly the 10 bytes I dynamically allocated.
But the for-loop lets me store values all the way up to the 99th index, and it keeps all of those values as well. If I am storing 100 int values, that is 200 bytes of memory, whereas I allocated only 10 bytes.
So where is the flaw in this code, and how does malloc behave? If malloc is non-deterministic in this way, how do we achieve proper dynamic memory handling?
The flaw is in your expectations. You lied to the compiler: "I only need 10 bytes" when you actually wrote 100*sizeof(int) bytes. Writing beyond an allocated area is undefined behavior and anything may happen, ranging from nothing to what you expect to crashes.
If you do silly things expect silly behaviour.
That said, malloc is usually implemented to ask the OS for chunks of memory in sizes the OS prefers (such as a page) and then manage that memory itself. This speeds up future mallocs, especially if you make lots of small allocations, and it reduces the number of system calls, which are quite expensive.
First of all, on most platforms the size of int is 4 bytes. You can check that with:
printf("the size of int is %zu\n", sizeof(int));
When you call the malloc function you allocate space in heap memory. The heap is a region set aside for dynamic allocation. There's no enforced pattern to the allocation and deallocation of blocks from the heap; you can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time. Because your program is small and you happen not to collide with anything else in the heap, you can even run this loop with more than 100 values and it may still appear to work.
When you know what you are doing with malloc, you can build programs with proper dynamic memory handling. When your code allocates improperly, the behaviour of the program is "unknown". You can use the gdb debugger to find where a segmentation fault occurs and to inspect how things are laid out on the heap.
malloc behaves exactly as it states: it allocates n bytes of memory, nothing more. Your code might run on your PC, but operating on non-allocated memory is undefined behavior.
A small note...
int might not be 2 bytes; it varies across architectures and toolchains. When you want to allocate memory for n integer elements, you should use malloc( n * sizeof( int ) ).
In short, you manage dynamic memory with the other tools that the language provides ( sizeof, realloc, free, etc. ).
C doesn't do any bounds-checking on array accesses; if you define an array of 10 elements, and attempt to write to a[99], the compiler won't do anything to stop you. The behavior is undefined, meaning the compiler isn't required to do anything in particular about that situation. It may "work" in the sense that it won't crash, but you've just clobbered something that may cause problems later on.
When doing a malloc, don't think in terms of bytes, think in terms of elements. If you want to allocate space for N integers, write
int *arr = malloc( N * sizeof *arr );
and let the compiler figure out the number of bytes.
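For completeness, a corrected sketch of the question's program along those lines, which allocates space for the elements it actually uses, stays inside it, and frees it at the end:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t n = 100;
    int *arr = malloc(n * sizeof *arr);       //space for 100 ints, not 10 bytes
    if (arr == NULL) {
        perror("malloc");
        return 1;
    }

    for (size_t i = 0; i < n; i++) {
        arr[i] = (int)i;
        printf("%d ", arr[i]);
    }
    printf("\n");

    free(arr);
    return 0;
}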
I want to test how much the OS actually allocates when I request 24 MB of memory.
for (i = 0; i < 1024*1024; i++)
ptr = (char *)malloc(24);
When I write it like this, I see RES of 32M in the top command.
ptr = (char *)malloc(24*1024*1024);
But when I make this small change, RES is only 244. What is the difference between them? Why is the result 244?
The allocator has its own bookkeeping data structures that require memory as well. When you allocate in small chunks (the first case), the allocator has to keep a lot of additional data about where each chunk is allocated and how long it is. Moreover, you may get gaps of unused memory in between the chunks because malloc has to return a sufficiently aligned block, most usually on an 8-byte boundary.
In the second case, the allocator gives you just one contiguous block and does bookkeeping only for that block.
Always be careful with a large number of small allocations, as the bookkeeping memory overhead may even outweigh the amount of the data itself.
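On glibc you can peek at that overhead with malloc_usable_size (a GNU extension, so this sketch is not portable): each 24-byte request actually occupies a larger, aligned chunk plus the allocator's own header.

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>   //glibc-specific header for malloc_usable_size

int main(void)
{
    void *p[4];
    for (int i = 0; i < 4; i++) {
        p[i] = malloc(24);
        if (p[i] == NULL)
            return 1;
        //usable size is what the allocator really set aside for this chunk
        printf("asked for 24, got usable %zu at %p\n", malloc_usable_size(p[i]), p[i]);
    }
    for (int i = 0; i < 4; i++)
        free(p[i]);
    return 0;
}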
The second allocation barely touches the memory. The allocator tells you "okay, you can have it", but if you don't actually touch the memory, the OS never really gives it to you, hoping you'll never use it. A bit like a Ponzi scheme. The first method, on the other hand, writes something (a few bytes of allocator bookkeeping at most) to many pages, so the OS is forced to actually hand over the memory.
Try this to verify; you should see about 24 MB of usage:
memset(ptr, 1, 1024 * 1024 * 24);
In short, top doesn't tell you how much you allocated, i.e. what you asked from malloc. It tells you what the OS allocated to your process.
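A small experiment along those lines, assuming a typical Linux setup with demand paging: run it, watch the process in top from another terminal, and press Enter to step through. RES should only jump to about 24 MB after the memset touches the pages.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t len = 24u * 1024 * 1024;
    char *ptr = malloc(len);
    if (ptr == NULL)
        return 1;

    printf("allocated %zu bytes; RES should still be small - press Enter\n", len);
    getchar();

    memset(ptr, 1, len);                      //touch every page so the OS must back it
    printf("memset done; RES should now be about 24 MB - press Enter\n");
    getchar();

    free(ptr);
    return 0;
}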
In addition to what has been said:
It could be that some compilers notice that you allocate multiple 24-byte blocks in a loop, assigning their addresses to the same pointer and keeping only the last block you allocated, effectively rendering every earlier malloc useless. So the compiler may optimize your whole loop into something like this:
ptr = (char *)malloc(24);
i = 1024*1024;
I basically have this piece of code.
char (* text)[1][80];
text = calloc(2821522,80);
The way I calculated it, that calloc should have allocated 215.265045 megabytes of RAM; however, the program ended up exceeding that number and allocating nearly 700 MB of RAM.
So it appears I cannot properly know in advance how much memory that function will allocate.
How does one calculate that properly?
calloc (and malloc for that matter) is free to allocate as much space as it needs to satisfy the request.
So, no, you cannot tell in advance how much it will actually give you, you can only assume that it's given you the amount you asked for.
Having said that, 700 MB seems a little excessive, so I'd investigate whether the calloc was solely responsible for it, for example by writing a program that does only the calloc and nothing more.
You might also want to investigate how you're measuring that memory usage.
For example, the following program:
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

int main (void) {
    char (* text)[1][80];
    struct mallinfo mi;

    mi = mallinfo(); printf ("%d\n", mi.uordblks);
    text = calloc(2821522, 80);
    mi = mallinfo(); printf ("%d\n", mi.uordblks);

    return 0;
}
outputs, on my system:
66144
225903256
meaning that the calloc has allocated 225,837,112 bytes which is only a smidgeon (115,352 bytes or 0.05%) above the requested 225,721,760.
Well, it depends on the underlying implementation of malloc/calloc.
It generally works like this: there's a thing called the heap pointer which points to the top of the heap - the area from which dynamic memory gets allocated. When memory is first allocated, malloc internally requests x amount of memory from the kernel, i.e. the heap pointer is incremented by a certain amount to make that space available. That x may or may not be equal to the size of the memory block you requested (it might be larger to account for future mallocs). If it isn't, then you're given at least the amount of memory you requested (sometimes more, because of alignment requirements). The rest is made part of an internal free list maintained by malloc. To sum it up, malloc has some underlying data structures, and a lot depends on how they are implemented.
My guess is that the x amount of memory was larger (for whatever reason) than you requested and hence malloc/calloc was holding on to the rest in its free list. Try allocating some more memory and see if the footprint increases.
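A rough, Unix-only way to watch that "heap pointer" (the program break) is sbrk(0), which reports its current position. Note that modern glibc often satisfies large requests with mmap instead of moving the break, which is another reason the footprint can look nothing like the number you passed to calloc. A sketch, assuming a glibc-style system:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>   //for sbrk; may require _DEFAULT_SOURCE on some systems

int main(void)
{
    void *before = sbrk(0);
    void *small = malloc(1000);               //likely served from the heap
    void *after_small = sbrk(0);
    void *big = malloc(64 * 1024 * 1024);     //likely served via mmap instead
    void *after_big = sbrk(0);

    printf("break before:       %p\n", before);
    printf("after small malloc: %p\n", after_small);
    printf("after big malloc:   %p\n", after_big);

    free(small);
    free(big);
    return 0;
}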