Self-Destructing Process in Unix C

I want to delete an executable after I start the process.
I tried calling unlink and it works fine, but I want my executable to continue running.
Is the unlink approach correct? Are there any issues with using it?

On Unix, there shouldn't be any problems unlinking the executable of a running process.
When you unlink a file, the directory entry is removed, but the inode and the underlying data are not freed until all existing references to the file (i.e. hard links and open handles) are released.

Related

c/c++ flock as mutex on linux not robust to file delete

File locking in C using flock is commonly used to implement cross-platform cooperative inter-process locking/mutexes.
I have an implementation (Mac/Linux/Windows) that works well, but it is not robust to file deletion (at least under Linux).
One or more processes create and use a lockfile (/tmp/lockfile) and cooperatively interlock on a shared resource with it.
Some time later, I manually delete the lockfile (rm /tmp/lockfile). The running processes keep cooperating with each other, but any new process that wants to start using the same resource and lockfile breaks the overall mutex logic: it creates a new /tmp/lockfile that is a different file from the one already in use by the running processes.
What can be done to prevent the lockfile from being unlinked while any process has it open?
What other solutions can be used?
I can't use a semaphore because I need the lock to self-release if the owning process crashes.
You can indeed use a semaphore. Linux provides the flag SEM_UNDO, which undoes the semaphore operation on process termination (see semop(2)).
The rm command does not actually delete files. Rather, it unlinks them from the file system, as if via the unlink(2) syscall. The files will not be removed from disk as long as any process holds them open (or any other hard links to them exist), and processes that do hold them open continue to refer to the same file, even though it no longer appears in directory listings. Nothing prevents another file from being created and linked to the file system at the same place as the previous one, but that is an altogether different file. This behavior is desirable for consistent program behavior, and some programs intentionally use it to their advantage for managing temporary files.
There is nothing you can do to prevent a process with sufficient privileges from unlinking the lock file. Any process that has sufficient privilege to create the lock file has sufficient privilege to unlink it, with the consequences you describe. One usually mitigates this problem by creating a temporary file with an unpredictable name for use with flock(), so that the file name or an open file handle must be exchanged between processes that want to synchronize actions by locking that file. For the particular case of child processes, you can rely on the child inheriting open file descriptors from its parent to enable them to get at the lock file even if it has been unlinked.
On the other hand, if you are relying on a lock file with a well-known name, then the solution may be to create the file in advance, make root its owner and the owner of every directory in the path leading to it, and deny all other users write access to the file and those directories. You could further wrap it up with mandatory access controls (an SELinux policy) if you wanted to be even more careful.

How to rename a file on Windows if multiple processes have already opened it, in C

I want to take a backup of a file once it reaches a particular size. The file is attached to the stdout stream of multiple processes.
I doubt freopen can be used in this case, since the file is shared by multiple processes.
Even if it succeeds in the process that executes it, the other processes' references to stdout would go to NULL; that is the behavior on Unix, and I don't know how it behaves on Windows.
Renaming the file is also not allowed on Windows because multiple processes have it open.
Is there any way to forcefully rename the file? On Unix, if I move the file, all the processes keep referring to the moved file. That is also fine; I have a mechanism to later inform the other processes to reopen the file.
How can I achieve this on Windows in C?

Too many open files in system while not actually opening any files

I am developing a backup utility and I am getting the error:
Too many open files in system
after it runs for a while. The error is returned by stat().
Since I am not actually opening any files (fopen()), my question is if any of the following functions (which I am using) take up a file descriptor, and if so, what can I do to release it?
getwd()
chdir()
mkdir()
stat()
time()
The functions you listed are safe; none of them return anything that you could "close".
To find out more, run lsof -p <PID> on your backup process. That will give you a list of the files the process has open, which in turn should give you an idea of what is going on.
See: lsof manual page.

Preventing threads from writing to the same file

I'm implementing an FTP-like protocol in Linux kernel 2.4 (homework), and I was under the impression that if a file is open for writing any subsequent attempt to open it by another thread should fail, until I actually tried it and discovered it goes through.
How do I prevent this from happening?
PS: I'm using open() to open the file.
PS2: I need to be able to access existing files. I just want to prevent them being written to simultaneously.
You could keep a list of open files, and then before opening a file check to see if it has already been opened by another thread. Some issues with this approach are:
You will need to use a synchronization primitive such as a Mutex to ensure the list is thread-safe.
Files will need to be removed from the list once your program is finished with them.
Note that system-level file locking (e.g. flock) is process-based, so it cannot arbitrate between threads of the same process. You will need in-process, thread-level locking, for example a mutex defined with pthreads.
Use the O_CREAT and O_EXCL flags to open(). That way the call will fail if the file already exists.

Release all open files and directories at exit forcefully in C

I want to forcefully release all open files and directories my program has opened during its execution. I want to do this because I have a very big program that opens many files and directories which I am not able to keep track of. Is there any way to do it? That is, I want to retrieve a list of all open files and directories and close them at exit.
I know registration of exit handlers using atexit() function. Can something be done with it?
Edit:
I am using Cygwin on Windows. I want to do this because my program's resources are not being released automatically. I have a directory that is created and then opened using opendir(). After my program finishes, when I try to delete that directory, I get "can't delete, being used by another program". Only after terminating and restarting explorer.exe am I able to delete the directory.
The problem occurs unevenly: I am able to delete some directories but not others.
If by "release" you mean "close", that will happen anyway. The C runtime and the operating system will take care of that for you. On process termination all resources the process had open will be closed off. It would be a very rare (and poor quality) environment that didn't do that.
Also check _fcloseall()
man signal should give you some hints; with a handler installed you can catch most termination signals, not only a normal exit.
