I have a scenario where I created a pipe for communication between a parent and a child. The parent writes data to the pipe (using the write function) and then closes the corresponding file descriptor. The problem is that when I want to write data to the pipe again, write returns -1. I think it's because the writing end was closed in the previous iteration. So how do I open the corresponding file descriptor again after it has been closed once?
I tried using the open() function, but it requires a path to some file as an argument. I am not using any files in my application; I only have plain file descriptors (int arr[2]).
Is it possible to achieve the above scenario with pipes?
Once a pipe is closed, it's closed. You can't bring it back.
If you want to write more to it, don't close it in the first place - it's as simple as that.
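A minimal sketch of that idea, assuming a parent that writes and a child that reads (the loop count and message text are made up for illustration): the parent keeps the write end open across every write and closes it only once, at the very end.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }

    pid_t pid = fork();
    if (pid == 0) {                 /* child: reader */
        close(fd[1]);               /* child does not write */
        char buf[64];
        ssize_t n;
        while ((n = read(fd[0], buf, sizeof buf)) > 0)
            write(STDOUT_FILENO, buf, n);
        close(fd[0]);
        _exit(0);
    }

    /* parent: writer -- keep fd[1] open across all writes */
    close(fd[0]);                   /* parent does not read */
    for (int i = 0; i < 3; i++) {
        char msg[32];
        int len = snprintf(msg, sizeof msg, "message %d\n", i);
        if (write(fd[1], msg, len) == -1)
            perror("write");
    }
    close(fd[1]);                   /* close once, after the last write */
    wait(NULL);
    return 0;
}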
Something to know about anything file-related under Unix (pipes are also a kind of file): the file name is only used when opening the file. Once the file is open, it stays available until it is closed, and the name is never used again. If someone deletes the file in another window while it is open, only the name is gone, not the file. This means:
The file is still on disk
It has no name
It is still open
When it is closed, the kernel removes it for good
Knowing this may help you understand why it would be nearly impossible to "reopen" a file, pipe, or anything similar: the file name and the descriptor have different lifetimes.
The only exceptions are stdout and stderr, whose descriptors are always known to be 1 and 2.
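A small demonstration of that lifetime difference, not from the original answer; the scratch path /tmp/demo.txt is just a placeholder. The name is removed while the file is still open, yet the descriptor keeps working until it is closed.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd == -1) { perror("open"); return 1; }

    write(fd, "still here\n", 11);
    unlink("/tmp/demo.txt");        /* the name is gone... */

    lseek(fd, 0, SEEK_SET);         /* ...but the open descriptor still works */
    char buf[32];
    ssize_t n = read(fd, buf, sizeof buf);
    write(STDOUT_FILENO, buf, n);

    close(fd);                      /* only now can the kernel reclaim the file */
    return 0;
}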
Related
I have a question about what happens if I close a file descriptor after writing to it (e.g. fd[1] after piping fd), and then open it again to write. Will the data be overwritten and everything written before be gone, or will it keep writing from the point where the first write stopped?
I used the system call open() with the file descriptor and no other arguments.
If you close either of the file descriptors for a pipe, it can never be reopened. There is no name by which to reopen it. Even with /dev/fd file systems, once you close the file descriptor, the corresponding entry in the file system is removed — you're snookered.
Don't close a pipe if you might need to use it again.
Consider whether to make a duplicate of the pipe before closing; you can then either use the duplicate directly or duplicate the duplicate back to the original (pipe) file descriptor, but that's cheating; you didn't actually close all the references to the pipe's file descriptor. (Note that the process(es) at the other end of the pipe won't get an EOF indication because of the close — there's still an open file descriptor referring to the pipe.)
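A rough sketch of that dup() trick (the variable names are mine): the pipe descriptor number is closed and later restored from the duplicate, and because the duplicate keeps the pipe open, the reader never sees EOF in between.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    int saved = dup(fd[1]);   /* second reference to the same pipe */
    close(fd[1]);             /* the descriptor number is closed... */

    /* ...but the pipe itself is still open via 'saved',
       so the read end will not see EOF yet */
    dup2(saved, fd[1]);       /* put the write end back on fd[1] */
    close(saved);

    write(fd[1], "hi\n", 3);

    char buf[8];
    ssize_t n = read(fd[0], buf, sizeof buf);
    write(STDOUT_FILENO, buf, n);

    close(fd[0]);
    close(fd[1]);
    return 0;
}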
I am having a problem in a daemonize program. After closing all the open descriptors, I need to reopen stdout to print a message.
I have one approach, but it is not working.
The idea was to duplicate the stdout descriptor using dup and reopen from that. But when the daemonize function is called, it closes all file descriptors, so the duplicate descriptor gets closed as well.
Can anyone please help me do this?
If you use daemon() to daemonize, you can pass noclose to prevent these file descriptors from being closed:
daemon(0, 1);
But you should close them by hand after your check; otherwise your terminal might get messed up.
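A possible shape for that, assuming the "check" is whatever start-up validation still needs to print to the terminal; daemon() is the nonstandard BSD/glibc call, and the /dev/null redirection afterwards is my own illustration of closing the terminal by hand.

#define _DEFAULT_SOURCE          /* for daemon() on glibc */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* nochdir = 0, noclose = 1: stdio stays open after daemonizing */
    if (daemon(0, 1) == -1) {
        perror("daemon");
        exit(EXIT_FAILURE);
    }

    /* the check that still needs the terminal */
    printf("daemon started, pid %d\n", (int)getpid());
    fflush(stdout);

    /* now detach stdio by hand so the terminal is not held open */
    int devnull = open("/dev/null", O_RDWR);
    if (devnull != -1) {
        dup2(devnull, STDIN_FILENO);
        dup2(devnull, STDOUT_FILENO);
        dup2(devnull, STDERR_FILENO);
        if (devnull > STDERR_FILENO)
            close(devnull);
    }

    /* ... daemon work ... */
    return 0;
}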
I have an application that creates multiple instances (processes) of itself, and these processes have a shared data structure. In that struct there is a file descriptor used for logging data to a file. There is a check in the logging function that tests whether the file descriptor is -1, and if it is, it opens the file and sets the value of the shared file descriptor.
Other processes/threads do the same check, but by that time the fd is != -1, so the file does not get opened. They then continue to write to the file. The write fails most of the time and returns -1. When the write did not fail, I checked the file path of the fd using readlink; the path was some file other than the log file.
I am assuming this is because even though the file descriptor value was always 11, even across runs, that value refers to a different file in each process. So it is the eleventh file that particular process has open? In that case the log file is not even regarded as open by these processes, and even if they did open it, the fd would be different.
So my question is: is this correct? My second question is how I should re-implement this, given that multiple processes need to write to this log file. Would each process need to open the file, or is there another, more efficient way? Do I need to close the file so that other processes can open and write to it?
EDIT:
The software is an open source software called filebench.
The file can be seen here.
The log method is filebench_log. Line 204 is the first check I mentioned, where the file is opened. The write happens at line 293. The fd value is the same across all processes: 11. It is shared by all processes and set up mostly here. The file is only opened once (verified via print statements).
The shared data struct that holds the fd is called filebench_shm, and the fd is filebench_shm->shm_log_fd.
EDIT 2:
The error message that I get is Bad file descriptor. Errno is 9.
EDIT 3:
So it seems that each process has a different index table for its fds. From Wikipedia:
On Linux, the set of file descriptors open in a process can be accessed under the path /proc/PID/fd/, where PID is the process identifier.
So the issue that I am having is that for two processes with process IDs 101, 102 the file descriptor 11 is not the same for the two processes:
/proc/101/fd/11
/proc/102/fd/11
I have a shared data structure between these processes. Is there another way I can share an open file between them other than an fd, since that doesn't work?
It seems that it would be simplest to open the file before spawning the new processes. This avoids all the coordination complexity regarding opening the file by centralizing it to one time and place.
I originally wrote this as a solution:
Create a shared memory segment.
Put the file descriptor variable in the segment.
Put a mutex semaphore in the segment.
Each process accesses the file descriptor in the segment. If it is not open, lock the semaphore, check if it is open, and if not, open the file. Release the mutex.
That way all processes share the same file descriptor.
But this assumes that the underlying file descriptor object is also in the shared memory, which I think it is not.
Instead, use the open then fork method mentioned in the other answer, or have each process open the file and use flock to serialize access when needed.
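A minimal sketch of the open-then-fork approach; the log path and the messages are placeholders of my own. Because the children inherit the same open file description, and the file is opened with O_APPEND, each write() lands at the current end of the file and the processes do not clobber each other.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    /* open the log once, before any worker processes exist */
    int log_fd = open("/tmp/app.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (log_fd == -1) { perror("open"); exit(EXIT_FAILURE); }

    for (int i = 0; i < 4; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            /* child inherits log_fd: same open file description as the parent */
            char line[64];
            int len = snprintf(line, sizeof line, "worker %d says hello\n", i);
            write(log_fd, line, len);   /* O_APPEND keeps writes from overlapping */
            _exit(0);
        }
    }

    while (wait(NULL) > 0)
        ;
    close(log_fd);
    return 0;
}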
Here is the setup: I have a shared file (let's call it status.csv) that is read by many processes (let's call them consumers) in a read-only fashion. I have one producer that periodically updates status.csv by creating a temp file, writing data to it, and using the C function discussed here:
http://www.gnu.org/software/libc/manual/html_node/Renaming-Files.html
to rename the temp file onto status.csv (effectively overwriting it) so that the consumers can process the new data. I want to guarantee (as much as possible in the Linux world) that the consumers won't get a malformed/corrupted/half-old/half-new status.csv file; they should get either all of the old data or all of the new. I can't quite get that guarantee from the description of rename: it guarantees that the rename action itself is atomic, but I want to know whether a consumer that already has status.csv open will continue to read the file as it was when it was opened, even if the file is renamed over/overwritten by the producer in the middle of the read.
I attempted to prototype this, expecting the consumers to get some kind of error or a half-old/half-new file, but the file always seems to stay in the state it was in when the consumer opened it, even if it is renamed over/overwritten multiple times.
BTW, these processes are running on the same machine (RHEL 6).
Thanks!
In Linux and similar systems, if a process has a file open and the file is deleted, the file itself remains undeleted until all processes close it. All that happens immediately is that the directory entry is deleted so that it cannot be opened again.
The same thing happens if rename is used to replace an open file. The old file descriptor still keeps the old file open. However, new opens will see the new file.
Therefore, for your consumers to see the new file, they must close and reopen the file.
Note: your consumers can discover if the file has been replaced by using the stat(2) call. If either the st_dev or st_ino entries (or both) have changed, then the file has been replaced and must be closed and reopened. This is how tail -F works.
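A hedged sketch of that check; the helper name file_was_replaced and the overall framing are mine, not part of any library. The stat of the path is compared against the stat taken when the file was opened, and a change in st_dev or st_ino means the path now names a different file.

#include <stdio.h>
#include <sys/stat.h>

/* returns 1 if 'path' now refers to a different file than 'old' describes */
static int file_was_replaced(const char *path, const struct stat *old)
{
    struct stat now;
    if (stat(path, &now) == -1)
        return 1;                       /* gone or unreadable: treat as replaced */
    return now.st_dev != old->st_dev || now.st_ino != old->st_ino;
}

int main(void)
{
    struct stat at_open;
    if (stat("status.csv", &at_open) == -1) { perror("stat"); return 1; }

    /* ... read from the already-open descriptor ... */

    if (file_was_replaced("status.csv", &at_open))
        printf("status.csv was renamed over; close and reopen to see the new data\n");
    return 0;
}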
Consider the following scenario: I am opening a tar file (say abc.tar.gz), writing the data, and before closing the file descriptor, I am trying to extract the same file.
I am unable to do so. But if I extract the file after closing the fd, it works fine.
I wonder what could be the reason.
Every open file has a position where data is read or written. After writing to the file, the position is at the end, so trying to read will attempt to read from that position. You have to move the position back to the beginning of the file with a function like lseek.
Also, did you open the file in both read and write mode?
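If the read did happen inside the same program, the pattern would be roughly this (the file name matches the question; the written bytes are a stand-in for real archive data):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("abc.tar.gz", O_RDWR | O_CREAT, 0644);
    if (fd == -1) { perror("open"); return 1; }

    write(fd, "some archive bytes", 18);   /* position is now at the end */

    lseek(fd, 0, SEEK_SET);                /* rewind before reading back */

    char buf[32];
    ssize_t n = read(fd, buf, sizeof buf);
    printf("read back %zd bytes\n", n);

    close(fd);
    return 0;
}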
Edit
After reading your comments, I see you do not actually read the file from inside your program, but from an external program. Then it might be as simple as the file not being flushed to disk, which happens automatically when a file is closed. You might want to look at the fsync function for that, or possibly the sync function.
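A small sketch of flushing before handing the file to an external extractor; the actual archive writing is elided, and the system() call to tar is purely illustrative.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd = open("abc.tar.gz", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return 1; }

    /* ... write the archive bytes into fd ... */

    if (fsync(fd) == -1)        /* push the data out before anyone else reads it */
        perror("fsync");

    /* extraction by an external program while fd is still open */
    system("tar -xzf abc.tar.gz");

    close(fd);
    return 0;
}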