Memory blocks / Pages [closed]

I'm currently working on a "memory project" at school, more precisely about dynamic memory allocation. My problem is about heap management.
I don't really understand the difference between memory pages and memory blocks. Please correct me if I'm wrong: the heap contains some unmapped regions. When we try to allocate some memory, the requested size becomes a mapped region of the heap that we can use.
This new region seems to contain some "memory pages" of 4096 bytes, but I don't understand where the memory "blocks" are...

Memory pages are a term from virtual memory management. A page is the smallest unit managed by the MMU (Memory Management Unit), which translates virtual addresses (linear addresses, in x86 terminology) to physical addresses. For more information on pages and virtual memory management, read up on how x86 paging works.
Memory blocks are not tied that tightly to one topic. The term can refer to virtually anything and is used whenever memory is discussed informally.
As far as I can tell, here they refer to the chunks a user allocates on the free store, a.k.a. the heap: the memory areas handed out by an API that provides access to a heap (like the C standard library with malloc, free, etc.).
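To make the distinction concrete, here is a minimal C sketch (not part of the original answer) that prints the system page size next to the address of a small heap block; the 100-byte request size is arbitrary.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        /* The page size is a property of the MMU/OS, typically 4096 bytes on x86. */
        long page_size = sysconf(_SC_PAGESIZE);

        /* A "block" is whatever the heap allocator hands back; it is often much
           smaller than a page, and several blocks usually share one page. */
        void *block = malloc(100);

        printf("page size: %ld bytes, 100-byte block at %p\n", page_size, block);
        free(block);
        return 0;
    }

Several small blocks typically live inside the same 4096-byte page, which is why the two terms describe different granularities.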

Related

Is memory usage doubled when using mmap on /dev/mem? [closed]

My understanding of mmap is that, when used on a file, it essentially reserves address space for that file so it can be accessed as fast as possible when you need it. But what happens when you mmap a device like /dev/mem, where it IS the memory? Does it then use some other memory to map that memory, or is it smart enough to realize that it is mapping RAM and doesn't need to store it again? What about mapping a RAM disk, which is still memory but isn't grouped in with the regular memory?
/dev/mem is the physical memory. Mapping it won't double anything: it adds the amount of physical memory your machine has to your virtual address space, but your actual memory usage will not go up.
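A minimal sketch of what such a mapping looks like, assuming a Linux system where /dev/mem is accessible (many kernels restrict it via CONFIG_STRICT_DEVMEM); it maps a single page read-only and is not from the original answer.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* Usually requires root, and the kernel may refuse access entirely. */
        int fd = open("/dev/mem", O_RDONLY);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        /* Map one page of physical memory starting at physical offset 0.
           This consumes virtual address space and a page-table entry, but no
           new RAM is allocated: the mapping aliases existing physical pages. */
        size_t len = 4096;
        void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        printf("physical page 0 mapped at virtual address %p\n", p);

        munmap(p, len);
        close(fd);
        return 0;
    }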

Why is there not a function in C that returns the size of the allocated memory pointed to by a pointer? [closed]

After searching around for a bit on how deallocation (via free) after allocation (via something like malloc) works, I'm left puzzled.
Reading around, sources say that when memory is allocated, one extra word is actually allocated to store the size of the allocation. This is what free uses to deallocate the memory. But this information raises another question I cannot find the answer to anywhere: if the size of the allocated memory is stored somewhere, why is there no function in C that can give us the size of memory allocated via something like malloc?
The C standard only specifies the functions needed to allocate and free memory. This means that how the heap is managed is an implementation detail of the standard library on your system. How Linux does it is likely very different from Microsoft's implementation.
That being said, these implementations sometimes expose additional functions that reveal the internal state of the heap. For example, glibc on Linux provides malloc_usable_size, which takes a pointer to allocated memory and returns the number of bytes that may be written at that address.
Still, it's best not to depend on system-specific functions like that. Your program should have some way of keeping track of how much space each malloc'ed block of memory holds.
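As a concrete illustration of the glibc extension mentioned above (not portable C, and the exact usable size is allocator-dependent):

    #include <malloc.h>   /* glibc-specific: malloc_usable_size */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        size_t requested = 100;
        void *p = malloc(requested);
        if (!p) return 1;

        /* The usable size is at least what was requested, and often a bit
           more, because the allocator rounds up to its internal bin sizes. */
        size_t usable = malloc_usable_size(p);
        printf("requested %zu bytes, usable %zu bytes\n", requested, usable);

        free(p);
        return 0;
    }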

Finding unused memory in process memory [closed]

I'm looking for a reliable way to find unused memory in a C program's process, since I need to "inject" some data somewhere without it corrupting anything.
Whenever I find an area containing only zeros, that's a good sign. However, there are no guarantees: it can still crash. Non-zero memory is almost certainly in use, so it cannot be overwritten reliably (most memory has some kind of data in it).
I understand that you can't really know for sure (without having the application's source code, for instance), but are there any heuristics that make sense, such as choosing certain segments or memory that looks a certain way? Since the data can be 200 KB, it is rather large, and finding an appropriate address range can be difficult/tedious.
Allocating memory via OS functions doesn't work in this context.
Without deep knowledge of a remote process, you cannot know whether any memory actually allocated to that process is 'unused'.
Just finding writable memory (regardless of its current contents) is asking to crash the process, or worse.
Asking the OS to allocate more memory in the other process is the way to go: that way you know the memory is not used by the process, and the process won't receive that address through an allocation of its own.
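For what it's worth, here is a sketch of that approach assuming a Windows target (the question does not name an OS): VirtualAllocEx asks the OS for fresh memory inside the other process, and WriteProcessMemory copies the data in. The process ID and payload are placeholders, and error handling is minimal.

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        DWORD pid = 1234;                 /* hypothetical target process ID */
        const char payload[] = "hello";   /* the data to inject */

        HANDLE proc = OpenProcess(PROCESS_VM_OPERATION | PROCESS_VM_WRITE,
                                  FALSE, pid);
        if (!proc) { fprintf(stderr, "OpenProcess failed\n"); return 1; }

        /* Fresh memory from the OS inside the target process; it cannot
           overlap anything the process already uses. */
        void *remote = VirtualAllocEx(proc, NULL, sizeof payload,
                                      MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        if (!remote) { fprintf(stderr, "VirtualAllocEx failed\n"); CloseHandle(proc); return 1; }

        SIZE_T written = 0;
        WriteProcessMemory(proc, remote, payload, sizeof payload, &written);
        printf("wrote %zu bytes at remote address %p\n", (size_t)written, remote);

        CloseHandle(proc);
        return 0;
    }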

Dynamic memory allocation using malloc() [closed]

How is memory allocated using malloc()? Who allocates the memory, the OS or the compiler? Once the memory is freed using free(), can it be used by other processes?
A process has four memory regions: heap, stack, text, and data. When you use malloc, the memory is provided from the heap region; the compiler is not responsible for allocating it. When you use free, the memory block is returned to the heap.
Usually, the heap memory is directly supplied by a runtime sub-allocator sitting on top of whatever the OS provides. The sub-allocator is process-specific and does not require a kernel call. If the heap needs more, the sub-allocator has to resort to a syscall to get another chunk from the OS.
It's implementation-specific whether the sub-allocator ever releases chunks back to the OS.
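A small sketch of that layering, assuming a glibc-style allocator on Linux: sbrk(0) reports the current program break, so you can watch the sub-allocator grab a chunk from the OS on the first small malloc (very large requests typically go through mmap instead and would not move the break).

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        void *before = sbrk(0);      /* current end of the heap segment */

        void *p = malloc(1000);      /* small request: served by the sub-allocator,
                                        which extends the break if it has no room */
        void *after = sbrk(0);

        printf("program break moved by %td bytes\n",
               (char *)after - (char *)before);

        free(p);                     /* returned to the sub-allocator; it may or may
                                        not ever go back to the OS */
        return 0;
    }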

Advantages/disadvantages of mapping a whole file vs. blocks when needed [closed]

What are the advantages/disadvantages of mapping a whole file once vs. mapping large blocks when needed in an algorithm?
Intuitively, I would say it makes the most sense to just map the whole file and let the OS take care of reading from/writing to disk as needed, instead of making a lot of system calls, since the OS doesn't actually read the mapped file before it is accessed. At least on a 64-bit system, where address space isn't an issue.
Some context:
This is for an external priority heap developed during a course on I/O algorithms. Our measurements show that it is slightly better to just map the whole underlying file instead of mapping blocks (nodes in a tree) as needed. However, our professor does not trust our measurements and says that he didn't expect that behaviour. Is there anything we are missing?
We are using mmap with PROT_READ | PROT_WRITE and MAP_SHARED.
Thanks,
Lasse
If you have the VM space, just map the whole file. As others have already said, this gives the OS maximum flexibility to do read-ahead, and even if it doesn't, it will bring the data in as required through page faults, which are much more efficient than system calls because they occur while the system is already in the right kernel context.
Not sure why your professor doesn't expect that behaviour; it would be good to understand his rationale.
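A minimal sketch of the whole-file approach under the flags mentioned in the question; the file name "heap.dat" is a placeholder, and the file is assumed to exist with a nonzero size.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("heap.dat", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return 1; }

        /* One mmap call covers the whole file; pages are faulted in lazily
           on first access, so nothing is read from disk up front. */
        void *base = mmap(NULL, (size_t)st.st_size, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* Touching one byte only brings in the page that backs it. */
        ((char *)base)[0] ^= 0;

        munmap(base, (size_t)st.st_size);
        close(fd);
        return 0;
    }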
