How is application memory freed in operating systems? [closed] - c

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed last year.
I was going through common issues in memory management, and I'm curious how memory is managed in the following case with dynamic memory management in C:
I request a free portion of memory from the memory manager through a malloc() call.
I perform some computations and store some of the data in that section of memory.
The allocated memory is not freed.
How does the application's memory usage grow in this case? Does it keep growing (say, for a GUI element whose data container is never cleared once allocated)? Does it grow each time I open the application, until the program terminates (even though it was allocated in the normal fashion using the dynamic memory allocation functions)?
Will that segment of memory be freed by the application during runtime, or does the OS not care about memory management in such cases?

The OS keeps track of which physical pages of RAM are referenced by which processes, and which pages are free. The exact data structure differs between OSes, so it doesn't really matter. What matters is that when the OS needs to give your process a physical page of RAM, it can allocate one from the pool of free pages. When the process dies, the pages it was using can be reclaimed and marked as free again, to be used in future memory allocations.
That is how it works, in a nutshell.

When you start a program, the OS allocates some amount of memory to it. While the program runs, it might request more (with malloc() and the like). When the program terminates normally or is killed, ALL of its memory is freed back to the OS. "Memory leaks" are an issue only for very large programs that stay open for a long time and continually allocate more memory, like a web browser. If such a program, say, requests more memory every time it displays a page, does not free it, and stays open, then it can indeed grow to the point where it causes problems for the OS.

Related

Enforcing the type of medium on a Virtual Memory system [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
Suppose I'm designing a software application that requires high bandwidth / low latency memory transfers to operate properly.
My OS uses Virtual Memory addressing.
Is there a way to force the variables (that I choose) to be located in DDR and not on the hard drive, for example?
You're conflating virtual memory with swap: virtual memory just means that the address space in which a process operates is an abstraction presenting a very orderly structure, while the actual physical address space is occupied in an almost chaotic manner. And yes, virtual memory is part of memory page swapping, but it's not a synonym for it.
One way to achieve what you want is simply to turn off page swapping for the whole system. It can also be done for specific parts of the virtual address space. But before I explain how to do that, I need to tell you this:
You're approaching this from the wrong angle. The system main memory banks you're referring to as DDR (which is just a particular transfer clocking mode, BTW) are just one level in a whole hierarchy of memory. But actually even system main memory is slow compared to the computational throughput of processors. And this has been so since the dawn of computing. This is why computers have cache memory; small amounts of fast memory. And on modern architectures these caches also form the interface between caching hierarchy layers.
If you perform a memory operation on a modern CPU, this memory operation will hit the cache. If it's a read and the cache is hot, the cache will deliver; otherwise it escalates the operation to the next layer. Writes affect only the caches in the short term, and propagate to main memory only through cache eviction or explicit memory barriers.
Normally you don't want to interfere with the decisions an OS takes regarding virtual memory management; you'll hardly be able to outsmart it. If you have a bunch of data sitting in memory which you access at high frequency, the memory manager will see that and won't even consider paging out that part of memory. I'll write that out again, in clear words: on every modern OS, regions of memory that are in active and repeated use will not be paged out. If swapping happens, it is because the system is running out of memory and is trying to juggle things around. This is called thrashing, and locking pages into memory will not help against it; all it will do is force the OS to go around and kill processes that hog memory (likely your process) to get some breathing space.
Anyway, if you really feel you must lock pages into memory, have a look at the mlock(2) syscall.
As far as I can tell, there is no way to force certain variables to be stored in DDR vs. HDD when virtual memory handles the address translations. What you can do is configure your operating system to use different types of secondary storage for swap, such as solid-state disks, HDDs, etc.

What If There Is A Memory Leak In Virtual Environment? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 5 years ago.
What I understood from reading some web articles is that just like any other program, the host OS allocates X amount of memory to virtual OS and when I start any program on virtual OS, the virtual OS fetches the exact amount of memory needed for the program.
When I shut the virtual OS down, it returns the allocated memory to the host OS.
But what happens if there is a memory leakage in the virtual OS environment? I am starting to learn C, and my professor says that in dynamic memory allocation operations, permanent leakage can happen in the host OS.
But what if it happens in virtual environment? I guess the program will give back ALL of the memory allocated to the host OS when I shut it down, right? What happens when I start the virtual host again the next time? Does the memory leakage show up there permanently?
I'm getting scared before I even write my first program in C.
P. S. If I use websites like Repl.it and use memory allocation over there, will it cause damage to my system still?
A memory leak can occur when you allocate some memory (with malloc in C) and never free it; this can happen for a number of reasons.
Now, the important thing to understand is that this allocated memory is released once the process finishes running.
When you set up your VM, you set the maximum amount of memory it can consume. When you shut the VM down, that memory is released as well.
You can't cause a "permanent" memory leak if the program you wrote isn't running. If the OS has some always-running service with a memory leak, then it will slow down as it runs out of memory, but when you restart, all the memory is released again.
So don't let this stop you: you can't damage your computer, and you can always recover by exiting the program (or restarting the PC in the worst case).
EDIT:
As was mentioned in the comments, there is a special scenario where you leak shared memory; in that case exiting the program might not release the memory. But I consider this the worst case, and a reboot will solve this problem as well. (Still not permanent.)
This answer is meant to provide a different view point, in addition to the good answer(s) and comments, which I agree with.
I am trying to see the worst case, i.e a way how you could get what you fear.
You probably have an environment which does not match the following assumptions; in that case my construct of course does not apply:
your virtual OS supports "persistence"
(i.e. you can shut it down in a "hibernate" way, it can start with the same running processes and their restored memory content)
your virtualisation engine also supports persistence of the virtual OS
shutting down for persistence in virtual OS is possible with a process occupying a critical amount of memory (sanity checks could prevent this)
virtualisation engine also does not mind the depleted memory and allows persistence
you choose to use persistent shutdown,
rebooting the virtual OS normally would include killing the evil process and reclaiming the memory (this is discussed by other answers and comments, but thanks to MrBlaise for proposing the clarification here)
In this circumstance I imagine that you can have:
a process which has taken (and run out of) all available memory
but has not crashed or otherwise triggered emergency measures
then this situation is saved for persistence before shutting down, successfully
then you restart the virtual OS
it restores the previous situation, i.e. returns from hibernation
the restored previous situation contains a still/again running process which has taken all memory
I think this will still only affect the virtual OS, not the host.
Please note that I intentionally made all necessary assumptions just to get the situation you are afraid of. Some of the assumptions are quite "daring".
I imagine for example that anything supporting persistence should have sanity checks, which at least detect the memory issue and ask how to handle.
(By the way, I do not know about virtualisation engines which support persistence, neither whether any do not support it. I am thinking in the generic, theoretical area.
In case I have invented the persistence for virtualisation engines (can't believe it), I claim this as prior art. ;-))

Buffer overflow exploit not working for kernel module [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have done buffer overflow exploits for user-level processes in the past. However, this does not seem to work well when I try to overflow the buffer of a vulnerable kernel module. Here's what I do:
There is a vulnerable kernel module which I can open as a file and read/write to it. The write operation is done without bounds checking. So I do a write operation and overflow the buffer and overwrite the return address as the address of an environment variable that has my shellcode. But something is going wrong. The kernel crashes and after rebooting I opened /var/log/messages and find that the eip is correctly pointing to the address I overwrote. But still it crashes saying "Unable to handle kernel null pointer dereference at virtual address"
Any reason why this would happen? Why wouldn't control be redirected to an overwritten return address?
Note: I ran this on Red Hat Enterprise Linux with Exec Shield and ASLR turned off.
The kernel can't jump to user addresses without performing a kernel exit, since it is running in a privileged mode with a different configuration than userspace (e.g. different paging tables, CPU permission bits, etc.).
So, in order to get shellcode into the kernel, you'd have to pack the shellcode into the buffer written to the driver (and copied to the kernel), and somehow get the address of it. This is not so hard if you have access to the kernel map, but recently Linux distributions have begun locking access to such sensitive information to make it harder to exploit kernel bugs.

How to setup programmable ram disk without root permissions on linux [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I need to set up and configure a ram-disk from within my C application. Is it possible?
From what I understand, a ram-disk can be set up, mounted and resized only by root.
My application would not have that privilege.
Is there any alternative to a ram-disk which can be programmed, if it's not possible with a ram-disk?
The purpose is to make data available across multiple applications which run at different times and over the network. Since the data is huge (~100-150 GB), a ram-disk implementation within the application would keep the data in memory, and the next application would just use it. This would save the expensive writing and reading of this huge data to and from the hard disk.
Would appreciate help on this.
Edit: A little more clarity on the problem statement. Process A runs on machine1 and writes data of about 100GB on machine2 over NFS and exits. The process B runs on machine1 and reads this data (100GB) from machine2 over NFS. The writing and reading of this huge data is turning out to be the bottleneck. How can I reduce this?
Use shm_open to create a named shared memory object, followed by ftruncate to set the size you need. You can then mmap part or all of it for writing, close it, and again shm_open it (using the same name) and mmap it in another process later. Once you're done with it, you can shm_unlink it.
Use a regular file, but memory map it. That way, the second process can access it just as easily as the first. The OS caching will take care of keeping the "hot" parts of the file in RAM.
Update, based on your mention of NFS. Look for caching settings in Linux, increase them to be very, very aggressive, so the kernel caches and avoids writing back to disk (or NFS) as much as possible.
The solution for you would be to use shared memory (e.g. with mmap). To circumvent the problem that your two processes do not run at the same time, introduce an additional process (call it the "ramdisk" process) that runs permanently and keeps the memory map alive, while your other processes connect to it.
Usually you set up a ram-disk using admin tools and use it in your program as a normal filesystem. To share data between different processes you could use shared memory.
I'm not sure what you want to achieve by loading 150 GB into memory (are you sure you have that much RAM?).
Ten years ago, I tried to put C header files on a ram-disk to speed up compilation; unfortunately this had no measurable effect, because the normal file system already caches them.

Memory Protection without MMU

I would like to know how memory can be protected without MMU support. I have tried to google it, but have not seen any worthwhile papers or research on it. The ones that do deal with it address only bugs, such as uninitialized pointers, and not memory corruption due to a soft error, that is, a hardware transient fault corrupting an instruction that writes to a memory location.
The reason I want to know this is that I am working on a proprietary manycore platform without any memory protection. My question is: can software be used to protect memory, especially against wild writes due to soft errors (as opposed to mistakes by a programmer)? Any help on this would be really appreciated.
If you're looking for runtime memory protection, the only sane option is hardware support. Hardware is the only way to intervene in a bad memory access before it can cause damage. Any software solution would be vulnerable to the very memory errors it is trying to protect against.
With software you could possibly implement a verification/detection scheme. You could periodically check portions of memory that the currently running program should not have access to and see if they have changed (probably by CRCing these areas). But of course, if the rogue program damages the area where the checksums are held, or where the checking program's code is held, then all bets are off.
Even this software checking solution would be more of a debugging utility than a permanent runtime protection. It is likely that a device with no MMU is a small embedded device which won't have the spare cycles to be constantly checking the device's memory.
Usually devices without MMUs are designed to run a single program with no kernel or anything else, and thus there is nothing to protect. If you need to run multiple programs and feel you need protection, you probably need a more advanced piece of hardware that supports the kind of features you're looking for.
If you want software implemented memory protection, then you will need support from your compiler and its associated libraries. I expect that there is one compiler only on this platform and so you should contact the vendor. I wouldn't hold out much hope for a positive response. Even if they had such tools, I would expect the performance of software memory protection to be unacceptable.
MMU-less systems are found in many embedded solutions.
The memory is managed by kernel code. The entire heap is divided into heap lists of various sizes (heap lists can be of sizes 4 bytes, 8 bytes, 16 bytes, ... up to 1024 bytes), and a header attached to each heap block records whether that particular block is taken. So when you need to assign a new heap block, you can browse through the heap lists, see which blocks are free, and assign one to the requesting application. The same goes for freeing a heap block of a particular size: the header of that block is updated to reflect that it has been freed.
This implementation also has to handle the scenario where the application requests a particular size of heap block and that size's heap list is full. In that case you break up a block from the next-larger heap list, or join together smaller heap blocks and add the result to the requested size's heap list.
The implementation is much simpler than it sounds.
It depends on what applications the platform will run. There is a technology called type-safe languages (ATS, for instance) which can protect against software errors. And such languages can have good performance (again, ATS for instance).