How does memory allocation in FUSE programs work? - c

I am following an example FUSE tutorial to understand how FUSE works on Linux. In the example, all the dynamic data is allocated with malloc and passed as user data to the fuse_main function. This data is later accessible to every FUSE call, and those calls need not come from the same process. How does this work?
To make the question clearer:
I run the main bbfs program with ../src/bbfs rootdir mountdir to mount the file system. The malloc happens in main() of bbfs.c. The bbfs program also defines several FUSE callback functions. But this program exits after the filesystem is mounted.
How can other programs (or the kernel) that call read() or open() on the mounted filesystem
1. access the memory the bbfs program allocated with malloc if that program has already exited? Wouldn't the OS free the malloc'd memory once bbfs exited?
2. access the functions it defined, if the process that defined them has already exited? Where would the object code of the FUSE functions reside after the process exited?
I am a bit confused about the lifetimes of the object code and the heap objects here, and about how other programs (or the kernel) use them later. Any help or pointers would be appreciated.

Most of your question is based on a false assumption:
… But [the FUSE server] exits after the filesystem is mounted.
It's not actually exiting at all. It's forking into the background and continuing to run as long as the filesystem is mounted.
While it's running, everything works as normal: the data malloc'd in main() and the callback functions still live in that background process, and the kernel forwards every read() or open() on the mountpoint to it.
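To make the lifetimes concrete, here is a minimal sketch loosely modelled on the bbfs tutorial (the names bb_state, bb_init and bb_getattr are placeholders, error handling is omitted, and the argv shuffling is a simplification of what bbfs does; compile with pkg-config fuse --cflags --libs). The block malloc'd in main() is handed to fuse_main(), which forks into the background and keeps serving requests, so every callback can reach that same block through fuse_get_context()->private_data:

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <stdlib.h>
#include <sys/stat.h>

/* Hypothetical per-filesystem state, malloc'd once and kept for the daemon's lifetime. */
struct bb_state {
    char *rootdir;
};

static void *bb_init(struct fuse_conn_info *conn)
{
    (void)conn;
    /* Whatever pointer fuse_main() received as user_data is available here,
     * and in every other callback, via fuse_get_context()->private_data. */
    return fuse_get_context()->private_data;
}

static int bb_getattr(const char *path, struct stat *st)
{
    struct bb_state *data = fuse_get_context()->private_data;
    (void)path;
    (void)st;
    (void)data;  /* the same heap block malloc'd in main() below */
    /* ... stat the corresponding file under data->rootdir ... */
    return -ENOSYS;
}

static struct fuse_operations bb_oper = {
    .init    = bb_init,
    .getattr = bb_getattr,
};

int main(int argc, char *argv[])
{
    /* Never freed explicitly: the process serving the filesystem keeps using it. */
    struct bb_state *data = malloc(sizeof *data);
    data->rootdir = realpath(argv[1], NULL);   /* invoked as: ./bbfs rootdir mountdir */

    /* fuse_main() forks into the background (unless run with -f) and loops,
     * dispatching kernel requests for the mountpoint to bb_oper until unmount.
     * argv is shifted so FUSE only sees the mountpoint argument. */
    return fuse_main(argc - 1, argv + 1, &bb_oper, data);
}

Unmounting (fusermount -u mountdir) ends the request loop; only then does the background process exit and its heap disappear with it.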

Related

When are thread function local variables allocated with POSIX?

I know it's a very specific question, and not very interesting for a high-level programmer, but I would like to know exactly when the local variables of a thread function are allocated; in other words, after
pthread_create(&thread, &function, ...)
is executed, can I say that they exist in memory or not (considering that the scheduler might not have executed the thread yet)?
I tried to search the POSIX library code, but it's not easy to understand. I get as far as the clone function, written in assembly, but then I cannot find the code of the system call service routine sys_clone to understand exactly what it does. I do see the invocation of the thread function in the clone code, but I think this should happen only in the created thread (which may not have been run by the scheduler by the time pthread_create returns), not in the creator.
in other words, after
pthread_create(&thread, &function, ...)
is executed, can I say that they exist in memory or not (considering that the scheduler might not have executed the thread yet)?
POSIX does not give you any reason for confidence that the local variables of the initial call to function function() in the created thread will have been allocated by the time pthread_create() returns. They might or might not have been, and indeed, the answer might not even be well defined inasmuch as different threads do not necessarily have a consistent view of machine state.
There is no special significance to the local variables of a thread's start function relative to the local variables of any other function called in that thread. Moreover, although pthread_create() will not return successfully until the new thread has been created, that's a separate question from whether the start function has even been entered, much less whether its local variables have been allocated.
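As an illustration (POSIX does not promise any particular ordering here), the locals in the hypothetical worker() below come into existence on the new thread's stack only once worker() actually starts executing, which may happen before or after pthread_create() returns in main(); compile with -pthread:

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    /* Allocated on the new thread's stack when worker() is entered;
     * that may happen before or after pthread_create() returns in main(). */
    int counter = 0;
    (void)arg;
    while (counter < 5)
        counter++;
    printf("worker finished, counter = %d\n", counter);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* Note the actual signature: pthread_create(&tid, attr, start_routine, arg);
     * success only means the thread exists, not that worker() has been entered. */
    if (pthread_create(&tid, NULL, worker, NULL) != 0)
        return 1;

    pthread_join(tid, NULL);   /* after joining, worker()'s locals are gone again */
    return 0;
}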

How to recollect memory after Control + C force quit [duplicate]

In my program, written in C and C++, I new an object to do the work and later delete it.
If the user presses Ctrl+C to break the process after the object has been newed but before it is deleted, delete is never called and a memory leak occurs.
What should I do to avoid this situation?
Also, if the memory is reclaimed by the OS, what about the opened files? Are they closed by the OS or should I close them manually?
In a virtual-memory-based system, all memory is returned to the OS when a process is terminated, regardless of whether it was freed explicitly in the application code. The same might not be true of other resources, however, which you may want to release cleanly. In that case you need to install a custom signal handler for SIGINT (the signal delivered on Ctrl+C); see e.g. http://linux.die.net/man/2/sigaction.
Pressing Ctrl+C sends a SIGINT to the process, which by default performs a mostly orderly shutdown, including tearing down the memory manager and releasing all allocated heap and stack. If you need to perform other tasks, you will need to install a SIGINT handler and do those tasks yourself.
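For that non-memory cleanup, a minimal sigaction-based handler might look like the sketch below (setting a flag and cleaning up outside the handler is one common pattern, not the only one):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int signo)
{
    (void)signo;
    got_sigint = 1;   /* just record the signal; do the real cleanup outside the handler */
}

int main(void)
{
    struct sigaction sa = { .sa_handler = on_sigint };
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGINT, &sa, NULL) == -1) {
        perror("sigaction");
        return 1;
    }

    while (!got_sigint)
        pause();   /* stand-in for the program's real work */

    /* Non-memory cleanup goes here: flush/close files, release locks,
     * unlink shared memory. The heap and stack are reclaimed by the
     * kernel at exit either way. */
    puts("caught SIGINT, cleaning up");
    return 0;
}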
If you allocated any SYSV Shared Memory Segments using shmget(2) then you must clean up after yourself with shmctl(2).
If you allocated any POSIX Shared Memory Segments using shm_open(3) then you must clean up after yourself with shm_unlink(3).
Both SYSV and POSIX shared memory segments persist past process termination. You can see what persists using the ipcs(1) tool.
Of course, if you haven't used any SYSV or POSIX shared memory segments, then this is all just noise. :)
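For the SysV case, here is a sketch of the shmget/shmctl round trip; the key and size are arbitrary placeholders:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    key_t key = 0x1234;                                /* arbitrary example key */
    int shmid = shmget(key, 4096, IPC_CREAT | 0600);   /* create or attach a 4 KiB segment */
    if (shmid == -1) {
        perror("shmget");
        return 1;
    }

    char *p = shmat(shmid, NULL, 0);
    if (p != (void *)-1) {
        /* ... use the segment through p ... */
        shmdt(p);
    }

    /* Without this the segment outlives the process: it stays visible in
     * `ipcs -m` until removed with ipcrm or a reboot. */
    if (shmctl(shmid, IPC_RMID, NULL) == -1) {
        perror("shmctl IPC_RMID");
        return 1;
    }
    return 0;
}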
You are subscribing to a rather common misconception that heap blocks that are not freed, but are still accessible at the time a program exits, are leaks. This is not true. Leaked blocks are those which no pointer still references and which therefore can't be freed.
Through the years of playing with (and breaking) lots of perfectly good kernels, I have never managed to sufficiently break a virtual memory manager to the point where it no longer reclaimed the entire address space of a process once it exited. Unless you are working with a kernel clearly marked as 'new and experimental', you will have better luck winning the lottery than finding a system that doesn't employ an effective virtual memory manager.
Don't put cruft in your code just to get a perfect score in Valgrind. If you have no real clean up tasks to do other than freeing memory that still has valid references, you don't need to bother. If someone throws a kill -9 to your program, you won't be able to handle it and will see the old behavior repeat.
If you have file descriptors to clean up, shared locks to relinquish, streams to flush or whatever else must happen so other processes don't miss you when you're gone, by all means take care of that. Just don't go adding code that does nothing to solve a non-problem, it just seems silly to do so.
Note
This was originally going to be a comment, but is far too long and SO frowns on writing a novel one comment at a time.
When Ctrl+C is pressed in a Linux console, the SIGINT signal is sent to the application, which, if the signal has no handler, terminates the program and returns all memory to the OS. That makes it pointless to free memory for its own sake, since all memory will be freed once the program exits. However, if you would like to handle the Ctrl+C SIGINT signal yourself (maybe to write out some last data to a file or do some other cleanup), you can use the function signal() to install a function to be called when the signal is received. Check out the man page for that function if you want to learn more.
If the process quits, a memory leak will NOT normally occur.
Most of the memory you allocate will be freed on Ctrl+C. If you see memory usage not return to its prior level, it is almost certainly caused by buffered filesystem blocks.
However, you should definitely clean things up, in particular if you have used any other types of resources:
Files created in temporary directories won't be deleted. This includes /dev/shm; leaving such a file behind could be considered a "memory leak".
System V or POSIX shared memory segments won't get thrown away when your process quits. If this bothers you, clean them up specifically. Alternatively, clean them up on a subsequent run.
Normally a leak (of a persistent or semi-persistent object e.g. file) doesn't matter if a subsequent run doesn't leak more memory. So cleaning up on a future run is good enough.
Imagine a process run every 5 minutes from cron: if it crashes on each run and leaves some mess behind, that's still OK provided each run cleans up the mess from the previous crash.
The OS will reclaim the memory allocated by the process when the process exits as a result of Ctrl-C or any other means.

What happens to the parameters to execv?

I was always a bit hazy on this little bit of C magic. When you call execv, you're "replacing the process image." What exactly does that mean? Just the DATA segment? Everything allocated to the process? The stack? The heap?
My question is about what happens to the storage used by the parameters that you pass to execv? If they were local variables to the function that called execv, then they're on the stack. But if you replace the process image, and call the new process's main() function, bad things would happen when main() returned, because the stack information that points to the return location from the main call was replaced by the new process image.
Same thing for variables, yes? And what if those variables were allocated on the heap?
Inquiring minds are inquiring to anybody who knows.
The exec family of functions replaces the process wholesale - data, stack, text, heap, everything. Some file descriptors can stay open (those the original process opened without FD_CLOEXEC set). But apart from that, you pretty much get a whole new process - see the execve(2) man page for all the details.
What happens to the parameters you passed in is the OS's problem - it has to make sure they're passed to the new process's main function in a way that complies with the standard, but I don't think POSIX dictates exactly how it does that.
For Linux, you can look at the fs/exec.c file to see the implementation. Jump near the end (line 1484 as I post this) to look at the do_execveat_common function which is the main part of the implementation. You'll see the arguments are copied into the new address space (calls to copy_strings near the end of the function).
Just the DATA segment?
No, all memory mappings are erased and re-created for the new executable.
Everything allocated to the process? The stack? The heap?
Yes, all memory. Some kernel resources, documented in the execve(2) man page, are preserved across the call though, such as file descriptors. These resources are managed by the kernel and are not part of the process memory. All of this is quite operating-system specific, though; the kernel can accomplish it through various means as long as it complies with the exec() documentation.
what happens to the storage used by the parameters that you pass to execv?
Typically the kernel makes a copy of those arguments, and injects them into the memory of the new executable.
But if you replace the process image, and call the new process's main() function, bad things would happen when main() returned,
No, when main() returns, that process ends. The code and memory of the original process that called exec() doesn't exist any more, there's nothing to return to.
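A small illustration of the argument copying: the strings below live on the calling process's heap and stack, yet /bin/echo (used here purely as an example target) still receives them, because the kernel copies them into the new image before the old one is destroyed:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char *heap_arg = strdup("copied from the old heap");   /* heap of the calling process */
    char *new_argv[] = { "echo", heap_arg, NULL };          /* array on the calling stack */

    /* The kernel copies the argument strings into the new address space
     * before the old one is torn down. */
    execv("/bin/echo", new_argv);

    /* Only reached if execv() failed; on success the old image, including
     * heap_arg and new_argv, no longer exists and there is nothing to return to. */
    perror("execv");
    free(heap_arg);
    return 1;
}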

Deallocate memory after kill in Linux

I am writing a program in C which uses a few processes, semaphores and mapped memory. When I map the memory and the program then fails part-way through, so it never reaches the stage where the memory is released, the program gets stuck and I have to kill it (Ctrl+C).
The problem is that when I fix the bug and run the program again, it reports a shared memory or semaphore error and terminates. The only way I have found to fix this is to restart the whole OS.
Is there another way to "deallocate" the allocated memory after an unexpected error?
FYI: ipcs doesn't show this allocated memory nor the semaphores used.
EDIT: I had to tag just one "right" answer, but I would like to thank you all for the ideas. The result is that after the problem occurs, deleting everything except pulse... in the /dev/shm folder is the solution.
POSIX shared memory doesn't have a specific command-line tool. But it is typically mapped into the /dev/shm tree where you can manage the segments with classic file manipulation tools.
Grapsus's comment is correct for POSIX shared memory. Forget the ipcrm, it's SysV only. Dig around in /dev/shm and remove the file that "represents" your shared memory slab.
You should probably also put in a signal handler to remove the shared memory upon a kill. It won't work if the process is killed with SIGKILL (9), but it will work with most of the lesser kills. Once the signal handler is in place, regular kills will invoke the handler, which can then programmatically remove the shared memory before the process shuts down.
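A sketch of that signal-handler approach for POSIX shared memory and semaphores (the /myprog_* names are made up; note that shm_unlink()/sem_unlink() are not formally async-signal-safe, so a stricter version would only set a flag in the handler and clean up from the main code; link with -pthread, and -lrt on older glibc):

#include <fcntl.h>
#include <semaphore.h>
#include <signal.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/myprog_shm"   /* hypothetical names; they appear under /dev/shm */
#define SEM_NAME "/myprog_sem"

static void cleanup_and_exit(int signo)
{
    /* Remove the names so the next run does not trip over stale objects. */
    shm_unlink(SHM_NAME);
    sem_unlink(SEM_NAME);
    _exit(128 + signo);
}

int main(void)
{
    signal(SIGINT,  cleanup_and_exit);   /* Ctrl+C */
    signal(SIGTERM, cleanup_and_exit);   /* plain kill; SIGKILL (-9) can never be caught */

    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    sem_t *sem = sem_open(SEM_NAME, O_CREAT, 0600, 1);
    (void)fd;
    (void)sem;

    /* ... the real work with the mapping and the semaphore ... */
    pause();

    shm_unlink(SHM_NAME);   /* normal-path cleanup */
    sem_unlink(SEM_NAME);
    return 0;
}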


Resources