How to reclaim memory after a Ctrl+C force quit [duplicate]

In my program, written in C and C++, I new an object to carry out a task and then delete the object.
If the user presses Ctrl+C to break the process after the object has been newed but before it has been deleted, delete is never called and a memory leak occurs.
What should I do to avoid this situation?
Also, if the memory is reclaimed by the OS, what about the opened files? Are they closed by the OS, or should I close them manually?

In a virtual-memory-based system, all memory is returned to the OS when a process terminates, regardless of whether it was freed explicitly in the application code. The same might not be true of other resources, however, which you may want to release cleanly. In that case, you need to provide a custom signal handler for the SIGINT signal (which is received on Ctrl+C); see e.g. http://linux.die.net/man/2/sigaction.
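A minimal sketch of that approach, assuming a simple main loop; the handler only sets a flag so the real cleanup can run outside signal context (the keep_running flag and the log file are illustrative, not part of the original answer):

    #include <signal.h>
    #include <stdio.h>

    static volatile sig_atomic_t keep_running = 1;

    /* Async-signal-safe: only set a flag, do the real work in main(). */
    static void on_sigint(int sig)
    {
        (void)sig;
        keep_running = 0;
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_handler = on_sigint;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);

        FILE *log = fopen("out.log", "w");   /* example non-memory resource */

        while (keep_running) {
            /* ... normal work ... */
        }

        /* Cleanup happens in the normal flow, not inside the handler. */
        if (log)
            fclose(log);
        return 0;
    }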

Pressing Ctrl+C will send a SIGINT to the process, which by default does a mostly orderly shutdown, including tearing down the memory manager and releasing all allocated heap and stack. If you need to perform other tasks, then you will need to install a SIGINT handler and perform those tasks yourself.

If you allocated any SYSV Shared Memory Segments using shmget(2) then you must clean up after yourself with shmctl(2).
If you allocated any POSIX Shared Memory Segments using shm_open(3) then you must clean up after yourself with shm_unlink(3).
Both SYSV and POSIX shared memory segments persist past process termination. You can see what persists using the ipcs(1) tool.
Of course, if you haven't used any SYSV or POSIX shared memory segments, then this is all just noise. :)
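As a sketch of the cleanup calls those two points refer to (the segment id and the name /my_segment are hypothetical):

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/mman.h>

    /* SysV: mark the segment for removal; it disappears once the last
       process detaches. */
    void cleanup_sysv(int shmid)
    {
        shmctl(shmid, IPC_RMID, NULL);
    }

    /* POSIX: unlink the named segment so it no longer persists in /dev/shm. */
    void cleanup_posix(void)
    {
        shm_unlink("/my_segment");
    }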

You are subscribing to a rather common misconception that heap blocks that are not freed, but are still reachable at the time a program exits, are leaks. This is not true. Leaked blocks are those which no pointer still references; hence they can't be freed.
Through the years of playing with (and breaking) lots of perfectly good kernels, I have never managed to sufficiently break a virtual memory manager to the point where it no longer reclaimed the entire address space of a process once it exited. Unless you are working with a kernel clearly marked as 'new and experimental', you will have better luck winning the lottery than finding a system that doesn't employ an effective virtual memory manager.
Don't put cruft in your code just to get a perfect score in Valgrind. If you have no real clean up tasks to do other than freeing memory that still has valid references, you don't need to bother. If someone throws a kill -9 to your program, you won't be able to handle it and will see the old behavior repeat.
If you have file descriptors to clean up, shared locks to relinquish, streams to flush or whatever else must happen so other processes don't miss you when you're gone, by all means take care of that. Just don't go adding code that does nothing to solve a non-problem, it just seems silly to do so.
Note
This was originally going to be a comment, but is far too long and SO frowns on writing a novel one comment at a time.

When Ctrl+C is pressed in a Linux console, the SIGINT signal is sent to the application, which, if the signal has no handler, will terminate the program, returning all memory to the OS. This of course would make it pointless to do any freeing of memory, since all memory will be freed once the program exits. However, if you would like to handle the Ctrl+C SIGINT signal (maybe to write out some last data to a file or do some other cleanup), you can use the function signal() to install a function to be called when the signal is received. Check out the man page for this function if you want to learn more.
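A bare-bones sketch of that suggestion using signal(); sigaction() is generally preferred, and the handler below only sets a flag so that any file writing happens safely from main():

    #include <signal.h>

    static volatile sig_atomic_t got_sigint = 0;

    static void handler(int sig)
    {
        (void)sig;
        got_sigint = 1;
    }

    int main(void)
    {
        signal(SIGINT, handler);

        while (!got_sigint) {
            /* ... work ... */
        }

        /* write out the last data, close files, then exit */
        return 0;
    }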

If the process quits, a memory leak will NOT normally occur.
Most of the memory you allocate will be freed on Ctrl+C. If you see memory usage not return to its prior level, it is almost certainly caused by buffered filesystem blocks.
However, you should definitely clean things up, in particular if you have used any other types of resources:
Files created in temporary directories won't be deleted. This includes /dev/shm; leaving such a file behind could be considered a "memory leak".
System V or POSIX shared memory segments won't get thrown away when your process quits. If this bothers you, clean them up explicitly. Alternatively, clean them up on a subsequent run.
Normally a leak (of a persistent or semi-persistent object, e.g. a file) doesn't matter if a subsequent run doesn't leak more. So cleaning up on a future run is good enough.
Imagine a process running every 5 minutes from "cron", if it crashes on each run and leaves some mess, it's still ok provided each run cleans up the mess from the previous crash.
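A sketch of that "clean up on the next run" idea for a leftover POSIX shared memory segment; the name /my_segment is made up for illustration:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* Call at startup: remove anything left behind by a previous crashed run.
       ENOENT simply means there was nothing to clean up. */
    void remove_stale_segment(void)
    {
        if (shm_unlink("/my_segment") == -1 && errno != ENOENT)
            perror("shm_unlink");
    }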

The OS will reclaim the memory allocated by the process when the process exits as a result of Ctrl-C or any other means.

Related

Should I free allocated memory when I exit?

I am writing a C terminal program that runs until the user terminates it with Ctrl+C. Think something like ping or top.
My program allocates to the heap but starts no other threads or processes. Should I be handling SIGINT and freeing any allocated memory before exit or is leaving it to the OS better practice?
The short answer is yes, given your context, which is a normal exit situation. In an abnormal exit situation, the short answer is absolutely no.
If you are concerned that your program is leaking memory during its execution, which is a bad thing in the sense that it slows your program's execution, then you can keep track of the memory that you allocate and free it before you exit. You can then run your program under valgrind, and if valgrind complains about blocks that weren't freed, you will know you have some type of leak. The location of the allocation will help you judge whether the leak is of any importance.
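One way to sketch the "keep track and free before exit" idea; the registry below is deliberately tiny and purely illustrative:

    #include <stdlib.h>

    #define MAX_TRACKED 128

    static void *tracked[MAX_TRACKED];
    static int   n_tracked = 0;

    /* Allocate and remember the pointer so it can be freed later. */
    void *tracked_malloc(size_t size)
    {
        void *p = malloc(size);
        if (p && n_tracked < MAX_TRACKED)
            tracked[n_tracked++] = p;
        return p;
    }

    /* Free everything still registered, e.g. just before exit. */
    void free_all_tracked(void)
    {
        for (int i = 0; i < n_tracked; i++)
            free(tracked[i]);
        n_tracked = 0;
    }

Running the program under valgrind --leak-check=full should then report no lost blocks, which makes any genuine leak easier to spot.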
If you exit anyway, you don't need to release any resources. The OS will take care of it just fine, and there is no benefit in doing it manually.
Note that free() is not async-safe, so you would definitely have to do the actual freeing in the main thread, and not in the handler. But don't do that, unless you want to do other things than exit().
Use SIGINT handlers for things like resetting the terminal (e.g. with ncurses), or saving critical state.

What happens with unjoined and not detached threads when the whole process terminates?

It is expected that threads, on which pthread_detach() was not called, should be pthread_join()ed before the main thread returns from main() or calls exit().
However, what happens when this requirement is not met? What happens when a process terminates when it still contains unjoined and not detached threads?
I would find it odd to learn that these other threads’ resources will not be reclaimed until system reboot. However, if these resources will be reclaimed, then there may be little need to bother about joining or detaching, mightn’t there?
It is up to the operating system. Typical modern operating systems will indeed reclaim the memory and descriptors (handles) used by abandoned threads. This is similar to how dynamically allocated memory works: typical modern systems will reclaim it when a process exits, even if the process never explicitly freed the memory. For certain unusual programs, this can be a meaningful performance optimization, because freeing lots of small resources takes time and the OS may be able to do it more quickly.
However, what happens when this requirement is not met? What happens when a process terminates when it still contains unjoined and not detached threads?
On any system with POSIX threads that is not ancient, the non-joined threads simply "evaporate" into space when the SYS_exit system call is performed by the main thread.
I would find it odd to learn that these other threads’ resources will not be reclaimed until system reboot.
They will be.
However, if these resources will be reclaimed, then there may be little need to bother about joining or detaching, mightn’t there?
It depends on what these threads do. The danger is at-exit data races.
In C++, global variables are destructed (usually via atexit or an equivalent registration mechanism), FILE streams are flushed and closed, etc.
If a non-joined thread tries to access any such resource, it will likely crash with SIGSEGV, possibly producing a core dump and an unclean process exit code, both of which are often quite undesirable.
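A small sketch of the safe pattern: join the worker before main() returns, so it cannot race with at-exit cleanup (the worker body is a placeholder):

    #include <pthread.h>

    static void *worker(void *arg)
    {
        (void)arg;
        /* ... uses stdio, globals, shared state ... */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        if (pthread_create(&t, NULL, worker, NULL) != 0)
            return 1;

        /* Joining guarantees the worker has finished before global
           destructors, atexit handlers and stdio teardown run. */
        pthread_join(t, NULL);
        return 0;
    }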

Deallocate memory after kill in Linux

I am writing a program in C that uses a few processes, semaphores and mapped memory. When I map the memory and the program then fails partway through, so it never reaches the stage where the memory is released, the program gets stuck and I have to kill it (Ctrl+C).
The problem is that when I fix the bug and run the program again, it reports a shared memory or semaphore error and terminates. I can only fix this problem by restarting the whole OS.
Is there another way to "deallocate" the allocated memory after an unexpected error?
FYI: ipcs shows neither this allocated memory nor the semaphores used.
EDIT: I had to tag just one "right" answer, but I would like to thank you all for the ideas. The result is that after the problem occurs, deleting everything except pulse... in the /dev/shm folder is the solution.
POSIX shared memory doesn't have a specific command-line tool. But it is typically mapped into the /dev/shm tree where you can manage the segments with classic file manipulation tools.
Grapsus's comment is correct for POSIX shared memory. Forget ipcrm; it's SysV-only. Dig around in /dev/shm and remove the file that "represents" your shared memory slab.
You probably should also put in a signal handler to remove the shared memory upon a kill. It won't work if you are killed with SIGKILL (9), but it will work with most of the lesser kills. Once the signal handler is in place, regular kills will call the handler, which can then programmatically remove the shared memory before process shutdown.
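A hedged sketch of that suggestion; the segment name is made up, and note that shm_unlink() is not on POSIX's list of async-signal-safe functions, so a more conservative variant would only set a flag in the handler and unlink from the main flow:

    #include <signal.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SEG_NAME "/my_segment"   /* hypothetical */

    static void on_term(int sig)
    {
        (void)sig;
        shm_unlink(SEG_NAME);   /* on Linux this amounts to unlinking a file in /dev/shm */
        _exit(1);
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_handler = on_term;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);
        sigaction(SIGTERM, &sa, NULL);

        /* ... create and use the shared memory segment ... */
        return 0;
    }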

freeing memory inside a signal handler

I am writing an API that uses sockets. In the API, I allocate memory for various items. I want to make sure I close the sockets and free the memory in case there is a signal such as Ctrl-C. In researching this, it appears free() is not on the safe function list (man 7 signal), so I can't free the memory inside a signal handler. I can close the socket just fine, though. Does anyone have any thoughts on how I can free the memory? Thank you in advance for your time.
Alternatively, don't catch the signal and just let the OS handle the cleanup as it's going to do during process cleanup anyway. You're not releasing any resources that aren't tied directly to the process, so there's no particular need to manually release them.
One technique (others exist too):
Have your program run a main processing loop.
Have your main processing loop check a flag to see if it should "keep running".
Have your signal handler simply set the "keep running" flag to false, but not otherwise terminate the program.
Have your main processing loop do the memory cleanup prior to exiting.
This has the benefit of placing both the allocation and de-allocation in blocks of code that are called in a known sequence. Doing so can be a godsend when dealing with webs of interrelated objects, and there is not going to be a race condition between two processing flows trying to mess with the same object.
Don't free in the handler. Instead, indicate to your program that something needs to be freed. Then, detect that in your program, so you can free from the main context, instead of the signal context.
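A sketch of that approach, with the actual free() deferred to the main context (buf and the flag name are illustrative):

    #include <signal.h>
    #include <stdlib.h>

    static volatile sig_atomic_t should_exit = 0;

    static void on_sigint(int sig)
    {
        (void)sig;
        should_exit = 1;          /* no free() here: it is not async-signal-safe */
    }

    int main(void)
    {
        signal(SIGINT, on_sigint);

        char *buf = malloc(4096);

        while (!should_exit) {
            /* ... use buf ... */
        }

        free(buf);                /* freed from the main context, not the handler */
        return 0;
    }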
Are you writing a library or an application? If you're writing a library, you have no business installing signal handlers, which would conflict with the calling application. It's the application's business to handle such signals, if it wants to, and then make the appropriate cleanup calls to your library (from outside a signal-handler context).
Of course even if you're writing an application, there's no reason to handle SIGINT to close sockets and free memory. The only reasons to handle the signal are if you don't want to terminate, or if you have unsaved data or shared state (like stuff in shared memory or the filesystem) that needs to be cleaned up before terminating. Freeing memory or closing file descriptors that are used purely by your own process are not tasks you need to perform when exiting.

How to avoid memory leak when user press ctrl+c under linux?

